• Title/Summary/Keyword: Multiple hash

Search Results: 64

Design and Implementation of High-Speed Pattern Matcher Using Multi-Entry Simultaneous Comparator in Network Intrusion Detection System (네트워크 침입 탐지 시스템에서 다중 엔트리 동시 비교기를 이용한 고속패턴 매칭기의 설계 및 구현)

  • Jeon, Myung-Jae;Hwang, Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.11
    • /
    • pp.2169-2177
    • /
    • 2015
  • This paper proposes a new pattern matching module that overcomes the increased runtime of a previous RAM-based algorithm, which was itself designed to overcome the cost limitation of hash-based algorithms using CAM (Content Addressable Memory). By adopting the Merge FSM algorithm to reduce the number of states, the proposed module contains a state block and an entry block stored in RAM. In the proposed module, one input string is compared with multiple entry strings simultaneously using the entry block. The effectiveness of the proposed pattern matching unit is verified by executing the Snort 2.9 rule set. Experimental results show that, compared to previous methods, the number of memory reads decreases by 15.8% and throughput increases by 47.1%, while memory usage increases by 2.6%.
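The multi-entry comparison idea can be sketched in software as a minimal illustration (this is not the authors' hardware design; the `block_size`, the example patterns, and the read counter are assumptions):

```python
def build_entry_blocks(patterns, block_size=4):
    """Group patterns into fixed-size entry blocks, as stored in RAM."""
    return [patterns[i:i + block_size] for i in range(0, len(patterns), block_size)]

def match(text, blocks):
    """At each text position, one memory read fetches a whole entry block,
    and the input is compared with every entry in it simultaneously
    (the inner loop simulates the hardware's parallel comparators)."""
    hits, reads = [], 0
    for pos in range(len(text)):
        for block in blocks:
            reads += 1                      # one memory read per block
            for pat in block:               # compared in parallel in hardware
                if text.startswith(pat, pos):
                    hits.append((pos, pat))
    return hits, reads
```

Because a single read serves `block_size` comparisons, the read count grows with the number of blocks rather than the number of entries, which mirrors the reduction in memory reads reported above.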

A Design and Implementation Vessel USN Middleware of Server-Side Method based on Context Aware (Server-Side 방식의 상황 인식 기반 선박 USN 미들웨어 구현 및 설계)

  • Song, Byoung-Ho;Song, Iick-Ho;Kim, Jong-Hwa;Lee, Seong-Ro
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.2
    • /
    • pp.116-124
    • /
    • 2011
  • In this paper, we implement vessel USN middleware using a server-side method that considers the characteristics of the ocean environment. We design a multiple-query processing module to process multidimensional sensor stream data efficiently, and propose an optimized query plan using Mjoin queries and a hash table. We also propose a method for context awareness and management of vessels that reflects ocean characteristics; in the context-awareness management module, risk contexts are determined using an SVM algorithm. As a result, with 5,000 input data sets, we obtained about 87.5% average accuracy for fire cases and about 85.1% average accuracy for vessel risk cases, and implemented a vessel USN monitoring system.

Quorum-based Key Management Scheme in Wireless Sensor Networks

  • Wuu, Lih-Chyau;Hung, Chi-Hsiang;Chang, Chia-Ming
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.9
    • /
    • pp.2442-2454
    • /
    • 2012
  • To ensure the security of wireless sensor networks, it is important to have a robust key management scheme. In this paper, we propose a Quorum-based key management scheme. A specific sensor, called the key distribution server (KDS), generates a key matrix and establishes a quorum system from the key matrix. The quorum system is a set system of subsets in which the intersection of any two subsets is non-empty. In our scheme, each sensor is assigned a subset of the quorum system as its pre-distributed keys. Whenever any two sensors need a shared key, they exchange their IDs, and then each sensor by itself finds a common key from its assigned subset. A shared key is then generated individually by the two sensors based on the common key. Under our scheme, no key needs to be refreshed when a sensor leaves the network. Upon a sensor joining the network, the KDS broadcasts a message containing the joining sensor's ID. After receiving the broadcast message, each sensor updates the key it has in common with the newly joined sensor. Only XOR and hash operations are executed during the key update process, and each sensor needs to update only one key. Furthermore, if multiple sensors would like to have secure group communication, the KDS broadcasts a message containing partial information about a group key, and each sensor in the group can then restore the group key by itself using the secret sharing technique, without cooperating with other sensors in the group.
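A grid quorum gives a concrete instance of the intersection property the scheme relies on. The sketch below is a minimal software illustration, assuming a grid quorum over an N x N key matrix and SHA-256 as the hash; the paper's actual matrix construction and key-update messages are not reproduced:

```python
import hashlib
import secrets

N = 4  # grid dimension; supports N * N sensors (illustrative)

# The KDS generates an N x N key matrix.
KEY_MATRIX = [[secrets.token_bytes(16) for _ in range(N)] for _ in range(N)]

def assigned_subset(sensor_id):
    """Grid quorum: sensor (r, c) is pre-distributed row r and column c.
    Any two such subsets intersect, so a common key always exists."""
    r, c = divmod(sensor_id, N)
    return {(r, j) for j in range(N)} | {(i, c) for i in range(N)}

def common_key(id_a, id_b):
    """From the exchanged IDs alone, each sensor finds the same common key."""
    cell = min(assigned_subset(id_a) & assigned_subset(id_b))  # deterministic pick
    return KEY_MATRIX[cell[0]][cell[1]]

def shared_key(id_a, id_b):
    """Pairwise key derived from the common key and both IDs (hash only)."""
    lo, hi = sorted((id_a, id_b))
    return hashlib.sha256(common_key(id_a, id_b) + bytes([lo, hi])).digest()
```

Both sides pick the same matrix cell deterministically, so no key material ever crosses the network.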

Efficient Tag Authentication Scheme using Tag ID Identification Bits in RFID Environment (RFID 환경에서 태그 ID의 식별 비트를 이용한 효율적인 태그 인증 기법)

  • Jang, Bong-Im;Jeong, Yoon-Su;Kim, Yong-Tae;Park, Gil-Cheol
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.1
    • /
    • pp.195-202
    • /
    • 2011
  • RFID (Radio Frequency IDentification) is a system for identifying objects, and its use is extending to distribution, healthcare, airports, and ports. RFID operates in a contactless environment, and reducing tag authentication time is important because multiple tags are identified at the same time. Most studies on RFID systems so far, however, have focused on improving security vulnerabilities in the tag authentication process. This paper therefore suggests an efficient scheme that decreases tag authentication time while keeping the authentication process secure. The proposed scheme cuts down on tag ID search time because it searches only the relevant classified IDs in the database, one of the components of the RFID system, by using identification bits for the tag ID search. Consequently, the suggested scheme decreases tag ID authentication time by reducing the processing time and the load on the database. It also improves the performance of the RFID system by improving the energy efficiency of passive tags.
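The classification idea, searching only the database bucket selected by the tag ID's leading identification bits, can be sketched as follows (the field widths and the SHA-256 challenge-response are assumptions; the paper's exact protocol is not reproduced):

```python
import hashlib
from collections import defaultdict

ID_BITS = 32     # tag ID width (assumed)
CLASS_BITS = 4   # leading identification bits used to classify tags

def bucket_of(tag_id):
    return tag_id >> (ID_BITS - CLASS_BITS)

# Back-end database grouped by identification bits.
DATABASE = defaultdict(dict)

def register(tag_id, secret):
    DATABASE[bucket_of(tag_id)][tag_id] = secret

def authenticate(tag_id, challenge, response):
    """Search only the bucket matching the tag's identification bits,
    instead of scanning the whole tag table."""
    secret = DATABASE[bucket_of(tag_id)].get(tag_id)
    if secret is None:
        return False
    return response == hashlib.sha256(secret + challenge).hexdigest()
```

With 4 identification bits, each lookup touches roughly 1/16 of the tag table, which is where the reduced search time and database load come from.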

Two-Dimensional Binary Search on Length Using Bloom Filter for Packet Classification (블룸 필터를 사용한 길이에 대한 2차원 이진검색 패킷 분류 알고리즘)

  • Choe, Young-Ju;Lim, Hye-Sook
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4B
    • /
    • pp.245-257
    • /
    • 2012
  • As one of the most challenging tasks in designing Internet routers, packet classification must achieve wire-speed processing for every incoming packet. The packet classification algorithm that applies binary search on trie levels to the area-based quad-trie is efficient. However, it accesses a hash table unnecessarily, even when there is no node at the corresponding level of the trie. To avoid such unnecessary off-chip memory accesses, we propose an algorithm that uses Bloom filters along with binary search on levels over multiple disjoint tries. For ACL, FW, and IPC sets with about 1,000, 5,000, and 10,000 rules, performance evaluation shows that adding Bloom filters improves search performance by 21 to 33 percent.

A Data Mining Approach for Selecting Bitmap Join Indices

  • Bellatreche, Ladjel;Missaoui, Rokia;Necir, Hamid;Drias, Habiba
    • Journal of Computing Science and Engineering
    • /
    • v.1 no.2
    • /
    • pp.177-194
    • /
    • 2007
  • Index selection is one of the most important decisions in the physical design of relational data warehouses. Indices significantly reduce the cost of processing complex OLAP queries, but they incur storage costs and maintenance overhead. Two main types of indices are available: mono-attribute indices (e.g., B-tree, bitmap, hash, etc.) and multi-attribute indices (join indices, bitmap join indices). To optimize star join queries, which are characterized by joins between a large fact table and multiple dimension tables and by selections on dimension tables, bitmap join indices are well suited; they require less storage owing to their binary representation. However, selecting these indices is difficult due to the exponential number of candidate attributes to be indexed. Most approaches to index selection follow two main steps: (1) pruning the search space (i.e., reducing the number of candidate attributes) and (2) selecting indices from the pruned search space. In this paper, we first propose a data-mining-driven approach to prune the search space of the bitmap join index selection problem. As opposed to an existing technique that uses only the frequency of attributes in queries as a pruning metric, our technique uses not only frequencies but also other parameters, such as the size of the dimension tables involved in the indexing process, the size of each dimension tuple, and the page size on disk. We then define a greedy algorithm that selects bitmap join indices minimizing processing cost while satisfying the storage constraint. Finally, to evaluate the efficiency of our approach, we compare it with some existing techniques.
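The second step, greedy selection under a storage constraint, can be sketched as follows (the benefit-per-storage ordering is an assumed heuristic standing in for the paper's richer cost model):

```python
def greedy_index_selection(candidates, storage_limit):
    """candidates: list of (attribute, benefit, storage_cost) tuples.
    Greedily pick the index with the best benefit per unit of storage,
    skipping any candidate that would violate the storage constraint."""
    chosen, used = [], 0
    for attr, benefit, cost in sorted(candidates,
                                      key=lambda c: c[1] / c[2],
                                      reverse=True):
        if used + cost <= storage_limit:
            chosen.append(attr)
            used += cost
    return chosen
```

Running it on a pruned candidate set keeps the greedy loop cheap, which is exactly why the data-mining pruning step precedes it.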

Biometric Information and OTP based on Authentication Mechanism using Blockchain (블록체인을 이용한 생체정보와 OTP 기반의 안전한 인증 기법)

  • Mun, Hyung-Jin
    • Journal of Convergence for Information Technology
    • /
    • v.8 no.3
    • /
    • pp.85-90
    • /
    • 2018
  • Blockchain technology provides a distributed trust structure; with it, we can implement a system that cannot be forged and enable smart contracts. With blockchain emerging as a next-generation security technology, there have been studies on authentication and security services that ensure integrity. Internet-based services have relied on password-based user authentication, but passwords can be stolen through a client or the network, and servers are exposed to hacking. For this reason, we propose an authentication mechanism based on blockchain technology and OTP to ensure integrity. In particular, the proposed two-factor authentication ensures secure authentication by combining OTP authentication and biometric authentication without using a password. The suggested scheme applies multiple hash functions so that the biometric information cannot be identified, and generates transactions to be placed in blocks; because this information is kept separate from the server, it is protected from server attacks.
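The idea of chaining multiple hash functions over the biometric template before it enters a transaction can be sketched as follows (the particular hash chain and the transaction fields are assumptions, not the paper's specification):

```python
import hashlib

def multi_hash(data: bytes) -> bytes:
    """Chain several hash functions so the original biometric template
    cannot be recovered or identified from the stored value."""
    out = data
    for h in (hashlib.sha256, hashlib.sha3_256, hashlib.blake2b):
        out = h(out).digest()
    return out

def make_transaction(biometric_template: bytes, otp: str) -> dict:
    """Bind the hashed template to the OTP; the digest goes into a block,
    never onto the authentication server."""
    digest = multi_hash(biometric_template + otp.encode())
    return {"auth_digest": digest.hex()}
```

The same template plus OTP always yields the same digest, so verification is possible, while any change to either input produces an unrelated value.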

Secure PIN Authentication Technique in Door-Lock Method to Prevent Illegal Intrusion into Private Areas (사적 영역에 불법 침입 방지를 위한 도어락 방식의 안전한 PIN 인증 기법)

  • Hyung-Jin Mun
    • Journal of Practical Engineering Education
    • /
    • v.16 no.3_spc
    • /
    • pp.327-332
    • /
    • 2024
  • The spread of smartphones provides users with a variety of services, making their lives more convenient. In particular, financial transactions can easily be made online after user authentication on a smartphone. Users access such services by authenticating with a PIN, but this makes them vulnerable to social engineering attacks such as spying or recording. We aim to increase security against these attacks by applying to smartphones the door-lock authentication method in which decoy ("imaginary") digits are included when entering a password. Door locks perform PIN authentication within the terminal, but on smartphones PIN authentication is handled by the server, so the PIN information must be transmitted safely. In the proposed technique, multiple PINs containing decoy digits are generated and transmitted as processed values such as hash values, ensuring safe transmission and enabling secure user authentication through a scheme that allows the PIN to be entered without exposure.
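The flow, in which the user keys in decoy digits alongside the real PIN and only a salted hash crosses the network, can be sketched as follows (the session nonce and the position agreement are assumptions standing in for the paper's protocol):

```python
import hashlib

def extract_pin(keystrokes: str, real_positions) -> str:
    """The user types decoy digits along with the PIN; only the agreed
    positions carry the real digits, defeating spying and recording."""
    return "".join(keystrokes[p] for p in real_positions)

def client_submit(keystrokes: str, real_positions, nonce: bytes) -> str:
    """Send only a salted hash of the extracted PIN, never the PIN itself."""
    pin = extract_pin(keystrokes, real_positions)
    return hashlib.sha256(nonce + pin.encode()).hexdigest()

def server_verify(stored_pin: str, nonce: bytes, digest: str) -> bool:
    return hashlib.sha256(nonce + stored_pin.encode()).hexdigest() == digest
```

An observer sees only the full keystroke sequence with decoys, and the network carries only a nonce-salted digest, so neither side of the attack exposes the PIN.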

Enhancing Installation Security for Naval Combat Management System through Encryption and Validation Research

  • Byeong-Wan Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.1
    • /
    • pp.121-130
    • /
    • 2024
  • In this paper, we propose an installation approach for Naval Combat Management System (CMS) software that identifies potential data anomalies during installation. With the popularization of wireless communication methods such as Low Earth Orbit (LEO) satellite communications, various ways of using wireless networks are being discussed for the CMS. One of these is using wireless network communication for installation, which is expected to enhance the real-time performance of the CMS. However, wireless networks are more vulnerable to security threats than wired networks, necessitating additional security measures. This paper presents a method in which files are transmitted to multiple nodes using encryption and, after installation, a validity check is performed to determine whether any tampering or alteration occurred during transmission, ensuring a proper installation. The feasibility of applying the proposed method to naval combat systems is demonstrated by evaluating transmission performance, security, and stability; based on these evaluations, results sufficient for application to the CMS have been obtained.
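The transmit-then-validate flow can be sketched as follows (the XOR keystream is a lightweight stand-in for a real cipher such as AES, which a deployed CMS would use; the digest distribution channel is assumed):

```python
import hashlib

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    Applying it twice with the same key recovers the plaintext."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def package_for_nodes(file_bytes: bytes, key: bytes):
    """Record the file's digest before sending, then encrypt for transit."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return xor_stream(file_bytes, key), digest

def install_on_node(ciphertext: bytes, digest: str, key: bytes) -> bytes:
    """Decrypt, then run the validity check before installing."""
    plaintext = xor_stream(ciphertext, key)
    if hashlib.sha256(plaintext).hexdigest() != digest:
        raise ValueError("file tampered with in transit; abort install")
    return plaintext  # safe to install
```

Each receiving node repeats the digest check independently, so tampering on any wireless hop is caught before installation proceeds.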

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, recent databases have changed from static structures to dynamic stream structures. Previous data mining techniques have been used as tools for decision making, such as establishing marketing strategies and DNA analysis. However, the ability to analyze real-time data quickly is necessary in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations on parts of a database or on each transaction, instead of on all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction is entered, we can obtain the latest mining results reflecting real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As the criteria of our performance analysis, we first consider the algorithms' total runtime and average processing time per transaction. To compare the efficiency of their storage structures, their maximum memory usage is also evaluated. Lastly, we show how stably the two algorithms mine databases that feature gradually increasing numbers of items.
With respect to mining time and transaction processing, hMiner is faster than Lossy counting. Since hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and thus has to search multiple nodes to reach a candidate pattern. On the other hand, hMiner shows worse maximum memory usage than Lossy counting: hMiner must keep all of the information for each candidate frequent pattern in order to store it in hash buckets, while Lossy counting reduces this information by using the lattice structure. Since Lossy counting's storage can share items that appear in multiple patterns, its memory usage is more efficient than hMiner's. However, hMiner is more efficient than Lossy counting in the scalability evaluation, for the following reasons: as the number of items increases, the number of shared items decreases, which weakens Lossy counting's memory efficiency; furthermore, as the number of transactions grows, its pruning effect worsens. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory. Hence, their data structures need to be made more efficient for use in resource-constrained environments such as wireless sensor networks (WSNs).
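Lossy counting's bucket-boundary pruning, which underlies the memory behavior discussed above, can be sketched as follows (a minimal single-item version with `epsilon` as the error bound; the pattern-lattice variant the paper evaluates is more involved):

```python
from math import ceil

def lossy_counting(stream, epsilon=0.1):
    """Approximate frequent-item counting over a stream.
    Each reported count undercounts the true count by at most epsilon * N.
    Returns {item: (count, delta)} where delta bounds the missed occurrences."""
    width = ceil(1 / epsilon)   # bucket width
    counts, bucket = {}, 1
    for n, item in enumerate(stream, start=1):
        count, delta = counts.get(item, (0, bucket - 1))
        counts[item] = (count + 1, delta)
        if n % width == 0:      # bucket boundary: prune rare items
            counts = {k: (c, d) for k, (c, d) in counts.items()
                      if c + d > bucket}
            bucket += 1
    return counts
```

The periodic pruning is what keeps Lossy counting's memory footprint small; hMiner trades that away for direct hash access and lower per-transaction latency.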