• Title/Summary/Keyword: Hash Algorithm

Tag Identification Process Model with Scalability for Protecting Privacy of RFID on the Grid Environment (그리드 환경에서 RFID 프라이버시 보호를 위한 확장성을 가지는 태그 판별 처리 모델)

  • Shin, Myeong-Sook;Kim, Choong-Woon;Lee, Joon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.6
    • /
    • pp.1010-1015
    • /
    • 2008
  • RFID systems are rapidly being adopted in various fields. For RFID systems to become widespread, however, the privacy invasion caused by stealing RFID tag information must be solved. Among existing studies addressing this problem, M. Ohkubo's scheme is considered the safest, but it demands an immense amount of computation to identify tags as the number of tags increases. This paper therefore proposes porting the identification process to a Grid environment so as to maintain privacy protection while reducing tag identification time, and proposes a Tag Identification Process Model that applies an even-division algorithm to distribute the SPs of the same site evenly across the nodes. When the proposed model runs in parallel on a Grid environment, it reduces the time needed to identify tags to 1/k.
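
Ohkubo's scheme forces the back end to recompute hash chains for every candidate tag secret, which is exactly the cost the paper distributes over Grid nodes. The sketch below illustrates that even-division idea under stated assumptions: SHA-256 stands in for the scheme's two hash functions H and G, MAX_CHAIN is an assumed chain length, and process-level parallelism substitutes for real Grid nodes; it is not the paper's implementation.

```python
# Hash-chain tag identification with the secret (SP) list split evenly
# across k workers. Illustrative sketch only; function names and MAX_CHAIN
# are assumptions, not the paper's Grid implementation.
import hashlib
from concurrent.futures import ProcessPoolExecutor

MAX_CHAIN = 100  # assumed maximum hash-chain length per tag

def H(x: bytes) -> bytes:          # chain-update hash
    return hashlib.sha256(b"H" + x).digest()

def G(x: bytes) -> bytes:          # output hash emitted by the tag
    return hashlib.sha256(b"G" + x).digest()

def search_partition(partition, tag_output):
    """Check every (tag_id, secret) pair in this node's partition."""
    for tag_id, secret in partition:
        s = secret
        for _ in range(MAX_CHAIN):
            if G(s) == tag_output:
                return tag_id
            s = H(s)
    return None

def identify(secrets, tag_output, k=4):
    """Even division: split the SP list into k chunks and search in parallel."""
    chunks = [secrets[i::k] for i in range(k)]
    with ProcessPoolExecutor(max_workers=k) as pool:
        for result in pool.map(search_partition, chunks, [tag_output] * k):
            if result is not None:
                return result
    return None
```

Each worker only scans N/k secrets, which is the source of the 1/k identification-time reduction claimed in the abstract.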

A Development of JPEG-LS Platform for Micro Display Environment in AR/VR Device (AR/VR 마이크로 디스플레이 환경을 고려한 JPEG-LS 플랫폼 개발)

  • Park, Hyun-Moon;Jang, Young-Jong;Kim, Byung-Soo;Hwang, Tae-Ho
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.2
    • /
    • pp.417-424
    • /
    • 2019
  • This paper presents the design of a JPEG-LS codec for lossless image compression in AR/VR devices. The proposed JPEG-LS (lossless) codec is mainly composed of a context modeling block, a context update block, a pixel prediction block, a prediction error coding block, a data packetizer block, and a memory block. All operations are organized in a fully pipelined architecture for real-time image processing, and the LOCO-I compression algorithm uses an improved 2D approach to comply with the SBT coding. Compared with a similar JPEG-LS study, the Block-RAM size of the proposed STB-FLC architecture is reduced to one third, and the parallel design of the prediction block improves the processing speed.
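
The pixel prediction block in a JPEG-LS codec implements LOCO-I's median edge detector (MED). The snippet below is a plain-software sketch of that standard predictor, not the paper's pipelined hardware design.

```python
# Median edge detector (MED) prediction used by LOCO-I/JPEG-LS.
def med_predict(a: int, b: int, c: int) -> int:
    """a = left neighbor, b = above neighbor, c = above-left neighbor."""
    if c >= max(a, b):        # edge above: predict from the left/above minimum
        return min(a, b)
    if c <= min(a, b):        # edge to the left: predict from the maximum
        return max(a, b)
    return a + b - c          # smooth region: planar prediction

# Example: bright left column, dark row above -> the predictor follows b.
print(med_predict(a=200, b=50, c=200))  # -> 50
```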

Matrix Character Relocation Technique for Improving Data Privacy in Shard-Based Private Blockchain Environments (샤드 기반 프라이빗 블록체인 환경에서 데이터 프라이버시 개선을 위한 매트릭스 문자 재배치 기법)

  • Lee, Yeol Kook;Seo, Jung Won;Park, Soo Young
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.2
    • /
    • pp.51-58
    • /
    • 2022
  • Blockchain technology is a system in which data from users participating in the blockchain network is distributed and stored. Bitcoin and Ethereum are attracting global attention, and the range of blockchain applications is expected to keep growing. However, because the blockchain transparently discloses all data to network participants, the need to protect blockchain data privacy is emerging in sectors that process personal information, such as finance, healthcare, and real estate. While studies using smart contracts, homomorphic encryption, and cryptographic key methods have mainly been conducted to protect blockchain data privacy, this paper proposes a matrix character relocation technique that differs from existing work. The proposed approach consists of two parts: relocating the original data into matrix characters, and restoring the relocated data to the original. Through qualitative experiments we evaluate the safety of the proposed approach, and by measuring the time it takes to revert relocated data to the original data we show that matrix character relocation is sufficiently applicable in private blockchain environments.
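
The abstract does not spell out the exact relocation rule, so the following is only an illustrative sketch of the general idea of writing characters into a matrix and reading them back in a different order: a row-wise write with column-wise read, padded with '_', chosen purely to show that such a transform is cheap and invertible. It is not the paper's technique.

```python
# Row-wise write / column-wise read character relocation (illustrative only).
import math

PAD = "_"

def relocate(text: str, cols: int) -> str:
    rows = math.ceil(len(text) / cols)
    padded = text.ljust(rows * cols, PAD)
    matrix = [padded[r * cols:(r + 1) * cols] for r in range(rows)]
    # Read column by column instead of row by row.
    return "".join(matrix[r][c] for c in range(cols) for r in range(rows))

def restore(scrambled: str, cols: int) -> str:
    rows = len(scrambled) // cols
    # The scrambled text was read column-major, so each slice is one column.
    columns = [scrambled[c * rows:(c + 1) * rows] for c in range(cols)]
    original = "".join(columns[c][r] for r in range(rows) for c in range(cols))
    return original.rstrip(PAD)   # simplified: assumes data has no trailing pad chars

s = relocate("blockchain-privacy", cols=5)
assert restore(s, cols=5) == "blockchain-privacy"
```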

A Design and Implementation of Local Festivals and Travel Information Service Application

  • Jae Hyun Ahn;Hang Ju Lee;Se Yeon Lee;Ji Won Han;Won Joo Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.11
    • /
    • pp.65-71
    • /
    • 2023
  • In this paper, we design and implement the Walking Life Festival application, which is based on the Android platform and provides information about domestic travel destinations and regional festivals in South Korea. The application utilizes various smartphone sensors, including the Step Counter sensor, Step Detector sensor, Acceleration sensor, and GPS sensor, and it makes use of the Google Maps API and public open APIs to offer information about domestic travel destinations and local festivals. The application also incorporates an automatic login feature using the SharedPreferences API. When storing login information in the database, it encrypts the input plaintext data using a hash algorithm. For Google Maps integration, it creates objects using the Google.maps.LatLngBounds() method and extends the location information through the extends method. Furthermore, this application contributes to the activation of the domestic tourism industry by notifying users about the timing of local festivals at domestic travel destinations, thus increasing their opportunities to participate in these festivals.
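
As a generic illustration of hashing the login plaintext before it is stored (the app itself does this on Android and keeps the result via SharedPreferences), the sketch below uses SHA-256 with a random salt from Python's standard library; the salt handling and function names are assumptions, not the app's code.

```python
# Salted hashing of login credentials before storage (generic sketch).
import hashlib
import os

def hash_credential(plaintext: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)                      # per-user random salt
    digest = hashlib.sha256(salt + plaintext.encode("utf-8")).digest()
    return salt, digest

def verify_credential(plaintext: str, salt: bytes, stored: bytes) -> bool:
    return hashlib.sha256(salt + plaintext.encode("utf-8")).digest() == stored

salt, stored = hash_credential("user-password")
assert verify_credential("user-password", salt, stored)
```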

New Distinguishing Attacks on Sparkle384 Reduced to 6 Rounds and Sparkle512 Reduced to 7 Rounds (6 라운드로 축소된 Sparkle384와 7 라운드로 축소된 Sparkle512에 대한 새로운 구별 공격)

  • Deukjo Hong;Donghoon Chang
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.869-879
    • /
    • 2023
  • Sparkle is one of the finalists in the Lightweight Cryptography Standardization Process conducted by NIST. It is a nonlinear permutation and serves as a core component of the authenticated encryption algorithm Schwaemm and the hash function Esch. In this paper, we provide specific forms of input and output differences for 6 rounds of Sparkle384 and 7 rounds of Sparkle512, and derive formulas for the complexity of finding input pairs that satisfy these differentials. Because the complexity is significantly lower than that of the corresponding task for random permutations with the same input and output sizes, these constitute valid distinguishing attacks. The numbers of attacked rounds (6 and 7) are very close to the minimum numbers of rounds actually used (7 and 8).
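
For context, the generic benchmark such distinguishers are measured against can be stated as follows; this is standard differential-distinguisher reasoning, not the paper's round-specific complexity formulas.

```latex
% For a random permutation \pi on b-bit states and a fixed differential
% (\Delta_in, \Delta_out), a random input pair conforms with probability
\[
  \Pr_{x}\!\left[\pi(x \oplus \Delta_{\mathrm{in}}) \oplus \pi(x)
                 = \Delta_{\mathrm{out}}\right] \approx 2^{-b},
\]
% so finding a conforming pair costs about 2^b attempts (b = 384 for
% Sparkle384, b = 512 for Sparkle512). The attack is a valid distinguisher
% whenever its pair-finding complexity is significantly below this bound.
```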

Enhancing Installation Security for Naval Combat Management System through Encryption and Validation Research

  • Byeong-Wan Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.1
    • /
    • pp.121-130
    • /
    • 2024
  • In this paper, we propose an installation approach for Naval Combat Management System(CMS) software that identifies potential data anomalies during installation. With the popularization of wireless communication methods, such as Low Earth Orbit(LEO) satellite communications, various utilization methods using wireless networks are being discussed in CMS. One of these methods includes the use of wireless network communications for installation, which is expected to enhance the real-time performance of the CMS. However, wireless networks are relatively more vulnerable to security threats compared to wired networks, necessitating additional security measures. This paper presents a method where files are transmitted to multiple nodes using encryption, and after the installation of the files, a validity check is performed to determine if there has been any tampering or alteration during transmission, ensuring proper installation. The feasibility of applying the proposed method to Naval Combat Systems is demonstrated by evaluating transmission performance, security, and stability, and based on these evaluations, results sufficient for application to CMS have been derived.
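
A minimal sketch of the post-installation validity check described above, under assumed details: SHA-256 file digests compared against a trusted manifest shipped with the installer. The CMS's actual encryption, transport, and manifest format are not reproduced here.

```python
# Post-install integrity validation against a digest manifest (sketch).
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_installation(install_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Return the files whose digests are missing or do not match the manifest."""
    mismatches = []
    for rel_path, expected in manifest.items():
        target = install_dir / rel_path
        if not target.exists() or file_digest(target) != expected:
            mismatches.append(rel_path)
    return mismatches

# Usage: an empty list means every installed file matches the sender's manifest.
# bad = validate_installation(Path("/opt/cms"), {"bin/cms": "ab12..."})
```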

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, databases have shifted from static structures to dynamic stream structures. Previous data mining techniques have served as decision-making tools for tasks such as establishing marketing strategies and DNA analysis. However, the ability to analyze real-time data more quickly is needed in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining over parts of a database or over each transaction, instead of over all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts a mining operation whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction is entered, the latest mining results reflect real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As performance criteria, we first consider total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, maximum memory usage is also evaluated. Lastly, we show how stably the two algorithms mine databases whose number of items gradually increases. In terms of mining time and transaction processing, hMiner is faster than Lossy counting: because hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and must traverse multiple nodes to reach them. On the other hand, hMiner performs worse than Lossy counting in terms of maximum memory usage. hMiner must keep the complete information of each candidate frequent pattern in its hash buckets, while Lossy counting reduces this information by using the lattice, whose storage can share items that appear in multiple patterns, making its memory usage more efficient. However, hMiner is more efficient than Lossy counting in the scalability evaluation for the following reasons: as the number of items increases, fewer items are shared, so Lossy counting's memory efficiency weakens, and as the number of transactions grows, its pruning effect deteriorates. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems although they require a significant amount of memory. Hence, their data structures need to be made more efficient so that they can also be used in resource-constrained environments such as WSNs (wireless sensor networks).
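
For reference, the sketch below shows the classic Lossy counting algorithm (Manku and Motwani) at the item level, since it is the baseline the paper evaluates; the pattern-level variant over transactions and hMiner's hash-based online structure are not reproduced here.

```python
# Lossy counting over a stream of items (item-level sketch of the baseline).
import math

class LossyCounting:
    def __init__(self, epsilon: float):
        self.epsilon = epsilon
        self.width = math.ceil(1 / epsilon)   # bucket width
        self.n = 0                            # items seen so far
        self.entries = {}                     # item -> [count, delta]

    def insert(self, item) -> None:
        self.n += 1
        bucket = math.ceil(self.n / self.width)
        if item in self.entries:
            self.entries[item][0] += 1
        else:
            self.entries[item] = [1, bucket - 1]
        if self.n % self.width == 0:          # bucket boundary: prune
            self.entries = {k: v for k, v in self.entries.items()
                            if v[0] + v[1] > bucket}

    def frequent(self, support: float):
        """Items whose estimated frequency is at least (support - epsilon) * n."""
        threshold = (support - self.epsilon) * self.n
        return [k for k, v in self.entries.items() if v[0] >= threshold]

lc = LossyCounting(epsilon=0.01)
for x in ["a", "b", "a", "c", "a"] * 100:
    lc.insert(x)
print(lc.frequent(support=0.2))  # -> includes "a"
```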

X-tree Diff: An Efficient Change Detection Algorithm for Tree-structured Data (X-tree Diff: 트리 기반 데이터를 위한 효율적인 변화 탐지 알고리즘)

  • Lee, Suk-Kyoon;Kim, Dong-Ah
    • The KIPS Transactions:PartC
    • /
    • v.10C no.6
    • /
    • pp.683-694
    • /
    • 2003
  • We present X-tree Diff, a change detection algorithm for tree-structured data. Our work is motivated by the need to monitor a massive volume of web documents and detect suspicious changes, called defacement attacks, on web sites. In this context, the algorithm must be very efficient in both speed and memory usage. X-tree Diff uses a special ordered labeled tree, the X-tree, to represent XML/HTML documents. X-tree nodes have a special field, tMD, which stores a 128-bit hash value representing the structure and data of the subtree, so that identical subtrees from the old and new versions can be matched. During this process, X-tree Diff uses the Rule of Delaying Ambiguous Matchings: it performs exact matching only where a node in the old version has a one-to-one correspondence with a node in the new version, and delays all other matchings, which drastically reduces the possibility of wrong matchings. X-tree Diff propagates such exact matchings upwards in Step 2 and obtains more matchings downwards from the roots in Step 3. In Step 4, nodes to be inserted or deleted are decided. We also show that X-tree Diff runs in O(n), where n is the number of nodes in the X-trees, in the worst case as well as in the average case. This result is even better than that of the BULD Diff algorithm, which is O(n log n) in the worst case. We experimented with X-tree Diff on real data, about 11,000 home pages from about 20 web sites, instead of synthetic documents manipulated for experimentation. Currently, the X-tree Diff algorithm is being used in a commercial hacking detection system, called WIDS (Web-Document Intrusion Detection System), which finds changes that occur in registered websites and reports suspicious changes to users.
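
The tMD matching step can be pictured with the short sketch below: each node's 128-bit hash is computed bottom-up over its label, data, and children's hashes, and only unambiguous hash matches are paired, in the spirit of the Rule of Delaying Ambiguous Matchings. The node layout and the use of MD5 are illustrative assumptions, not the X-tree Diff implementation.

```python
# Bottom-up subtree hashing and unambiguous subtree matching (sketch).
import hashlib
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    data: str = ""
    children: list["Node"] = field(default_factory=list)
    tmd: str = ""

def compute_tmd(node: Node) -> str:
    child_hashes = "".join(compute_tmd(c) for c in node.children)
    node.tmd = hashlib.md5((node.label + node.data + child_hashes).encode()).hexdigest()
    return node.tmd

def match_identical_subtrees(old_root: Node, new_root: Node):
    """Pair up subtrees with equal tMD values, keeping only unambiguous matches."""
    def index(root):
        table, stack = {}, [root]
        while stack:
            n = stack.pop()
            table.setdefault(n.tmd, []).append(n)
            stack.extend(n.children)
        return table
    compute_tmd(old_root); compute_tmd(new_root)
    old_idx, new_idx = index(old_root), index(new_root)
    # Delay ambiguous matchings: only hashes unique on both sides are matched now.
    return [(old_idx[h][0], new_idx[h][0])
            for h in old_idx
            if h in new_idx and len(old_idx[h]) == 1 and len(new_idx[h]) == 1]
```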

Effective Load Shedding for Multi-Way Windowed Joins Based on the Arrival Order of Tuples on Data Streams (다중 윈도우 조인을 위한 튜플의 도착 순서에 기반한 효과적인 부하 감소 기법)

  • Kwon, Tae-Hyung;Lee, Ki-Yong;Son, Jin-Hyun;Kim, Myoung-Ho
    • Journal of KIISE:Databases
    • /
    • v.37 no.1
    • /
    • pp.1-11
    • /
    • 2010
  • Recently, there has been a growing interest in the processing of continuous queries over multiple data streams. When the arrival rates of tuples exceed the memory capacity of the system, a load shedding technique is used to avoid the system becoming overloaded by dropping some subset of input tuples. In this paper, we propose an effective load shedding algorithm for multi-way windowed joins over multiple data streams. Most previous load shedding algorithms estimate the productivity of each tuple, i.e., the number of join output tuples produced by the tuple, based on its "join attribute value" and drop tuples with the lowest productivity. However, the productivity of a tuple cannot be accurately estimated from its join attribute value when the join attribute values are unique and do not repeat, or the distribution of the join attribute values changes over time. For these cases, we estimate the productivity of a tuple based on its "arrival order" on data streams, rather than its join attribute value. The proposed method can effectively estimate the productivity of a tuple even when the productivity of a tuple cannot be accurately estimated from its join attribute value. Through extensive experiments and analysis, we show that our proposed method outperforms the previous methods in terms of effectiveness and efficiency.
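
The sketch below is a deliberately simplified illustration of arrival-order-based shedding, not the paper's algorithm: productivity statistics are kept per arrival-order slot within the window, and when the buffer is full the tuple in the historically least productive slot is dropped. All names and the slot scheme are assumptions for illustration.

```python
# Arrival-order-based load shedding for an overloaded join buffer (sketch).
from collections import defaultdict

class ArrivalOrderShedder:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.arrivals = 0
        self.buffer = []                              # (slot, tuple) pairs
        self.stats = defaultdict(lambda: [0.0, 1])    # slot -> [outputs, seen]

    def productivity(self, slot: int) -> float:
        outputs, seen = self.stats[slot]
        return outputs / seen

    def admit(self, tup) -> None:
        slot = self.arrivals % self.capacity          # arrival-order slot in the window
        self.arrivals += 1
        if len(self.buffer) >= self.capacity:
            # Shed the tuple whose slot has produced the fewest join outputs so far.
            victim = min(range(len(self.buffer)),
                         key=lambda i: self.productivity(self.buffer[i][0]))
            self.buffer.pop(victim)
        self.buffer.append((slot, tup))

    def record_outputs(self, slot: int, produced: int) -> None:
        """Update the slot's statistics after the tuple joins with the other streams."""
        self.stats[slot][0] += produced
        self.stats[slot][1] += 1
```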

SNMPv3 Security Module Design and Implementation Using Public Key (공개키를 이용한 SNMPv3 보안 모듈 설계 및 구현)

  • Han, Ji-Hun;Park, Gyeong-Bae;Gwak, Seung-Uk;Kim, Jeong-Il;Jeong, Geun-Won;Song, In-Geun;Lee, Gwang-Bae;Kim, Hyeon-Uk
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.1
    • /
    • pp.122-133
    • /
    • 1999
  • Users can share information and use resources effectively over TCP/IP-based networks, so a protocol that manages complex networks effectively is needed. For the management of distributed networks, SNMP(Simple Network Management Protocol) was adopted as an international standard in 1989, and SNMPv2, to which a security function was added, was published in 1993. SNMPv2 provides two cryptographic mechanisms: DES, a symmetric encryption scheme, and the MD5(Message Digest 5) hash function for authentication. However, DES has the drawbacks that its key length is rather short and that encryption and authentication are executed separately. To solve these problems, we use RSA cryptography in this paper. We examine the items related to SNMP and, in addition to the DES and MD5 proposed in SNMPv3, enhance the security functionality by adopting RSA, a public-key algorithm that executes encryption and authentication simultaneously. The proposed SNMPv3 security module is written in Java under the Windows NT environment.
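
The sign-then-encrypt flow that lets one public-key algorithm provide both authentication and confidentiality can be sketched as follows. This example uses RSA-PSS and RSA-OAEP with SHA-256 from the Python `cryptography` package purely for illustration; the paper's module is written in Java, and the message must stay short here because no hybrid symmetric layer is used.

```python
# RSA sign-then-encrypt sketch: one public-key algorithm for both
# authentication and confidentiality (illustrative, not the paper's module).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"GET sysUpTime.0"   # must stay short for raw RSA-OAEP

# Authentication: the sender signs the message with its private key.
signature = sender_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Confidentiality: the message is encrypted with the receiver's public key.
ciphertext = receiver_key.public_key().encrypt(
    message,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# The receiver decrypts, then verifies the signature (verify raises on failure).
plaintext = receiver_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
sender_key.public_key().verify(
    signature, plaintext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```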
