• Title/Summary/Keyword: Check node

On Reducing False Positives of a Bloom Filter in Trie-Based Algorithms

  • Mun, Ju Hyoung;Lim, Hyesook
    • IEIE Transactions on Smart Processing and Computing / v.4 no.3 / pp.163-168 / 2015
  • Many IP address lookup approaches employ Bloom filters to achieve high-speed search performance. In particular, recent studies have shown that the search performance of trie-based algorithms can be significantly improved by adding Bloom filters. In such algorithms, the number of trie accesses can be greatly reduced because a Bloom filter can determine whether a node exists in the trie without actually accessing it. Bloom filters have no false negatives but do have false positives, and false positives lead to unnecessary trie accesses. The false positive rate must therefore be reduced to enhance the performance of lookup algorithms that use Bloom filters. One important characteristic of trie-based algorithms is that all the ancestors of a stored node are also stored. The proposed algorithm exploits this characteristic to reduce the false positive rate of a Bloom filter without increasing the memory allocated to it. When the Bloom filter produces a positive result for a trie node, we propose to check whether the node's ancestors are also positive. Because Bloom filters have no false negatives, a negative for any ancestor means that the node's positive is false. In other words, we propose to spend additional Bloom filter queries to reduce the false positive rate in trie-based algorithms. Simulation results show that querying one ancestor of a node can reduce the false positive rate by up to 67% with exactly the same architecture and the same memory requirement. The proposed approach can be applied to other trie-based algorithms that employ Bloom filters.
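
The ancestor-check idea lends itself to a short illustration. The sketch below is not the paper's implementation: it assumes a toy Bloom filter built with double hashing over SHA-256 and prefixes represented as binary strings, and the names BloomFilter and query_with_ancestor_check are illustrative. A positive result for a prefix is accepted only if its parent prefix is also positive.

```python
# Illustrative sketch (not the paper's implementation): querying a prefix's
# parent in the same Bloom filter before trusting a positive result.
import hashlib

class BloomFilter:
    def __init__(self, size_bits, num_hashes):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray((size_bits + 7) // 8)

    def _positions(self, item):
        # Double hashing: derive k bit positions from two base hashes.
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.size for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def query(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def query_with_ancestor_check(bf, prefix):
    """Accept a positive only if the parent prefix is also positive.

    In a trie every stored node's ancestors are stored too, so a negative
    parent proves the child's positive is false (no false negatives).
    """
    if not bf.query(prefix):
        return False
    if len(prefix) > 1 and not bf.query(prefix[:-1]):
        return False          # parent negative -> child positive must be false
    return True

# Example: program all ancestors of a stored prefix, then query.
bf = BloomFilter(1 << 12, 4)
for stored in ("1", "10", "101", "1010"):
    bf.add(stored)
print(query_with_ancestor_check(bf, "1010"))   # True (really stored)
print(query_with_ancestor_check(bf, "0111"))   # False unless both node and parent collide
```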

A Study to Apply A Fog Computing Platform (포그 컴퓨팅 플랫폼 적용성 연구)

  • Lee, Kyeong-Min;Lee, Hoo-Myeong;Jo, Min-Sung;Choi, Hoon
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.6 / pp.60-71 / 2019
  • As IoT systems such as smart farms and smart cities become popular, the large amount of data collected from many sensor nodes is sent to servers on the Internet, causing network traffic explosion, delivery delays, and an increased server workload. To solve these problems, the concept of fog computing has been proposed, in which data are stored between the IoT systems and the servers. In this study, we implemented a software platform for the fog node and applied it to a prototype smart farm system in order to check whether the problems listed above can be solved by using the fog node. When the fog node is used, the time taken to control an IoT device is lower than the response time of the existing device-to-server case. We confirmed that the fog node can also relieve the Internet traffic explosion and the increased workload on the server. We also showed that intelligent control of the IoT system is feasible by providing data visualization in the server and, in the fog node, real-time remote control, emergency notification, and data storage, the last being the basic capability of a fog node.

Complexity-Reduced Algorithms for LDPC Decoder for DVB-S2 Systems

  • Choi, Eun-A;Jung, Ji-Won;Kim, Nae-Soo;Oh, Deock-Gil
    • ETRI Journal / v.27 no.5 / pp.639-642 / 2005
  • This paper proposes two complexity-reduced algorithms for a low-density parity-check (LDPC) decoder. First, sequential decoding using a partial group is proposed. It has the same hardware complexity and requires fewer iterations with little performance loss. The amount of performance loss can be determined by the designer, based on a trade-off with the desired reduction in complexity. Second, an early detection method for reducing the computational complexity is proposed. Using a confidence criterion, some bit nodes and check node edges are detected early during decoding. Once these edges are detected, no further iterations are required for them; thus, early detection reduces the computational complexity.
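
The early-detection idea can be illustrated with a toy decoder. The sketch below is an assumption-laden illustration, not the paper's algorithm: it uses a plain min-sum decoder and treats "confidence" as the posterior LLR magnitude exceeding a fixed threshold (the value 15.0 is arbitrary); the paper's partial-group scheduling and its actual confidence criterion are not reproduced.

```python
# Illustrative sketch only: min-sum LDPC decoding where bit nodes whose
# posterior LLR magnitude exceeds a confidence threshold are frozen, so their
# outgoing messages are no longer updated. A real implementation would also
# skip the corresponding check node computations for the detected edges.
import numpy as np

def min_sum_decode(H, llr, max_iter=50, conf_threshold=15.0):
    """H: binary parity-check matrix (numpy array), llr: channel LLRs."""
    m, n = H.shape
    edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]
    v2c = {e: llr[e[1]] for e in edges}          # variable-to-check messages
    frozen = np.zeros(n, dtype=bool)             # early-detected bit nodes

    for _ in range(max_iter):
        # Check node update (min-sum): sign product and minimum magnitude
        # over all other incoming messages of the same check.
        c2v = {}
        for i in range(m):
            row = [e for e in edges if e[0] == i]
            for (ii, j) in row:
                others = [v2c[(ii, jj)] for (_, jj) in row if jj != j]
                sign = np.prod(np.sign(others)) if others else 1.0
                mag = min(abs(x) for x in others) if others else 0.0
                c2v[(ii, j)] = sign * mag
        # Posterior LLRs and variable node update; frozen bits keep their
        # outgoing messages fixed, so their edges are effectively skipped.
        post = llr.copy()
        for (i, j) in edges:
            post[j] += c2v[(i, j)]
        for (i, j) in edges:
            if not frozen[j]:
                v2c[(i, j)] = post[j] - c2v[(i, j)]
        frozen |= np.abs(post) > conf_threshold   # early detection
        hard = (post < 0).astype(int)
        if not np.any((H @ hard) % 2):            # all parity checks satisfied
            break
    return hard

# Tiny example: the weakly negative LLR at position 1 is corrected to bit 0.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.1, -0.3, 1.7, 2.5, 1.9, 1.2, 0.8])
print(min_sum_decode(H, llr))
```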

On Combining Chase-2 and Sum-Product Algorithms for LDPC Codes

  • Tong, Sheng;Zheng, Huijuan
    • ETRI Journal / v.34 no.4 / pp.629-632 / 2012
  • This letter investigates the combination of the Chase-2 and sum-product (SP) algorithms for low-density parity-check (LDPC) codes. A simple modification of the tanh rule for check node update is given, which incorporates test error patterns (TEPs) used in the Chase algorithm into SP decoding of LDPC codes. Moreover, a simple yet effective approach is proposed to construct TEPs for dealing with decoding failures with low-weight syndromes. Simulation results show that the proposed algorithm is effective in improving both the waterfall and error floor performance of LDPC codes.
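
For readers unfamiliar with the two ingredients being combined, the sketch below shows the standard tanh-rule check node update and Chase-2-style test error patterns. It does not reproduce the letter's actual modification of the tanh rule or its TEP construction for low-weight syndromes; the function names and the choice of three least-reliable positions are illustrative.

```python
# Illustrative sketch: standard tanh-rule CN update plus Chase-2-style TEPs.
import itertools
import numpy as np

def tanh_rule_check_update(incoming):
    """Standard sum-product CN update: for each edge, combine all other
    incoming variable-to-check LLRs with the tanh rule."""
    out = []
    for k in range(len(incoming)):
        others = [incoming[i] for i in range(len(incoming)) if i != k]
        prod = np.prod([np.tanh(x / 2.0) for x in others])
        prod = np.clip(prod, -0.999999, 0.999999)   # avoid arctanh overflow
        out.append(2.0 * np.arctanh(prod))
    return out

def chase2_teps(channel_llr, num_least_reliable=3):
    """Yield channel LLR vectors with every sign combination applied to the
    least reliable positions (Chase-2-style test error patterns)."""
    llr = np.asarray(channel_llr, dtype=float)
    weak = np.argsort(np.abs(llr))[:num_least_reliable]
    for flips in itertools.product([1.0, -1.0], repeat=num_least_reliable):
        biased = llr.copy()
        biased[weak] *= flips
        yield biased

# Each TEP would seed one SP decoding attempt; among the attempts that yield a
# valid codeword, the most likely candidate is kept.
```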

Split LDPC Codes for Hybrid ARQ

  • Joo, Hyeong-Gun;Hong, Song-Nam;Shin, Dong-Joon
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.10C / pp.942-949 / 2007
  • In this paper, we propose a new rate-control scheme, called splitting, that constructs low-rate codes from high-rate codes by splitting rows of the parity-check matrices of LDPC codes; it can produce rate-compatible LDPC codes with good initial-transmission performance. Good low-rate codes can be constructed by making the number of distinct check node degrees as small as possible after splitting. The proposed scheme achieves good cycle properties, low decoding complexity, and fast convergence, especially compared with puncturing. In particular, a rate-compatible repeat-accumulate-type LDPC (RA-type LDPC) code covering code rates from 1/3 to 4/5 is constructed using splitting. Simulations show that this code outperforms other rate-compatible RA-type LDPC codes at all rates and can be decoded conveniently and efficiently.
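
A toy example may help visualize row splitting. The sketch below splits one row of a small parity-check matrix into two rows whose supports partition the original row, which adds a check and lowers the design rate; it is only an illustration and ignores the paper's handling of check node degrees, cycle properties, and rate compatibility.

```python
# Toy illustration only: splitting one row of a parity-check matrix into two
# rows whose mod-2 sum is the original check. Each split adds a row, so with
# n columns the design rate drops from (n-m)/n to (n-m-1)/n (assuming the
# resulting matrix stays full rank).
import numpy as np

def split_row(H, row_idx, left_part):
    """Replace H[row_idx] by two rows: one supported on left_part,
    one on the remaining support of the original row."""
    row = H[row_idx]
    support = np.flatnonzero(row)
    left = np.intersect1d(support, left_part)
    right = np.setdiff1d(support, left)
    r1 = np.zeros_like(row)
    r1[left] = 1
    r2 = np.zeros_like(row)
    r2[right] = 1
    return np.vstack([np.delete(H, row_idx, axis=0), r1, r2])

H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 1, 1]])
H_low = split_row(H, 0, left_part=[0, 1])
print(H_low)   # three checks over six bits: design rate 4/6 -> 3/6
```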

Implementation of Intelligent Home Network and u-Healthcare System based on Smart-Grid

  • Kim, Tae Yeun;Bae, Sang Hyun
    • Journal of Integrative Natural Science / v.9 no.3 / pp.199-205 / 2016
  • In this paper, we established a ZigBee home network and combined a smart grid with a u-Healthcare system. We supported household electricity management by interconnecting home devices such as wireless sensors, a PLC modem, and a DCU, and we realized the smart grid and u-Healthcare at the same time by monitoring changes in body temperature, pulse, and blood pressure, processing the vital signs with an SVM algorithm over various ZigBee network channels, and enabling real-time checking through an IHD developed as the user interface. In addition, we minimized the energy consumption of each sensor node while vital signs are processed and realized a query processor that optimizes the accuracy and speed of queries. Using Mjoin, multiple query processing, and the SVM algorithm, we obtained a classification accuracy of 0.848 while requiring, on average, 17.9% less storage than the raw input data.

Automatic Quadrilateral Mesh Generation Using Updated Paving Technique in Various Two Dimensional Objects (다양한 2차원 영역에서의 향상된 Paving법을 이용한 자동 사각 요소 생성)

  • Yang, Hyun-Ik;Kim, Myung-Han
    • Transactions of the Korean Society of Mechanical Engineers A / v.27 no.10 / pp.1762-1771 / 2003
  • In mechanical design analysis, quadrilateral meshes are usually used because they produce smaller approximation errors than triangular meshes. Over the decades, the paving method has been considered the most robust of the existing automatic quadrilateral mesh generation methods. However, it also has some problems, such as unpredictable node projection and the generation of relatively large elements. In this study, the aforementioned problems are corrected by updating the paving method. In doing so, part of the node projection process is modified by classifying nodes based on their interior angles. The closure check process is also modified by adding more nodes while generating elements. The result shows a well-shaped element distribution in the final mesh without any of the aforementioned problems.

Efficient Design of Structured LDPC Codes (구조적 LDPC 부호의 효율적인 설계)

  • Chung Bi-Woong;Kim Joon-Sung;Song Hong-Yeop
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.1C / pp.14-19 / 2006
  • The high encoding complexity of LDPC codes can be addressed by designing a structured parity-check matrix. If the parity-check matrix of an LDPC code is composed of blocks of the same type, the decoder implementation can be simple: this structure allows structured decoding, and the memory required to store the parity-check matrix can be greatly reduced. In this paper, we propose a construction algorithm for short-block-length structured LDPC codes based on a girth condition, the PEG algorithm, and variable node connectivity. Simulations show that codes designed by this algorithm perform similarly to codes without the structural constraint at low SNR and better at high SNR.
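
The PEG ingredient mentioned above can be sketched compactly. The code below is a simplified illustration of the PEG idea, not the paper's structured construction: each new edge of a variable node is attached to a check node that is not yet reachable from it in the graph built so far (choosing the lowest-degree such check), which tends to avoid short cycles; when every check is already reachable, true PEG picks the most distant one, which this simplification replaces with a lowest-degree choice.

```python
# Simplified PEG-style edge placement for a bipartite Tanner graph.
def peg_construct(n_var, n_chk, var_degrees):
    """Return a list of (check, variable) edges; var_degrees[j] edges per variable node."""
    chk_adj = [set() for _ in range(n_chk)]   # check -> connected variables
    var_adj = [set() for _ in range(n_var)]   # variable -> connected checks

    def reachable_checks(v):
        # BFS from variable v over the current bipartite graph.
        seen_v, seen_c = {v}, set()
        frontier = {v}
        while frontier:
            next_checks = {c for u in frontier for c in var_adj[u]} - seen_c
            seen_c |= next_checks
            next_vars = {u for c in next_checks for u in chk_adj[c]} - seen_v
            seen_v |= next_vars
            frontier = next_vars
        return seen_c

    edges = []
    for v in range(n_var):
        for _ in range(var_degrees[v]):
            seen = reachable_checks(v)
            candidates = [c for c in range(n_chk) if c not in var_adj[v]]
            unreachable = [c for c in candidates if c not in seen]
            pool = unreachable if unreachable else candidates
            c = min(pool, key=lambda x: len(chk_adj[x]))  # lowest current degree
            chk_adj[c].add(v)
            var_adj[v].add(c)
            edges.append((c, v))
    return edges

print(peg_construct(n_var=6, n_chk=3, var_degrees=[2] * 6))
```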

Simplified 2-Dimensional Scaled Min-Sum Algorithm for LDPC Decoder

  • Cho, Keol;Lee, Wang-Heon;Chung, Ki-Seok
    • Journal of Electrical Engineering and Technology / v.12 no.3 / pp.1262-1270 / 2017
  • Among the various decoding algorithms for low-density parity-check (LDPC) codes, the min-sum (MS) algorithm and its modified versions are widely adopted because of their computational simplicity compared with the sum-product (SP) algorithm, at a slight loss in decoding performance. In the MS algorithm, the magnitude of the output message from a check node (CN) processing unit is decided by either the smallest or the next-smallest input message, denoted min1 and min2, respectively. It has been shown that multiplying the CN output message by a scaling factor improves the decoding performance. Further, Zhong et al. have shown that multiplying min1 and min2 by different scaling factors (called 2-dimensional scaling) improves the performance of the LDPC decoder even more. In this paper, the simplified 2-dimensional scaled (S2DS) MS algorithm is proposed. In the proposed algorithm, we identify a pair of the most efficient scaling factors whose multiplications can be replaced with combinations of addition and shift operations. Furthermore, one scaling operation is approximated by the difference between min1 and min2. The simulation results show that S2DS achieves error-correcting performance close to or better than the SP algorithm regardless of the coding rate, and its computational complexity is the lowest among the modified MS algorithms.
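
The min1/min2 bookkeeping and 2-dimensional scaling can be sketched as follows. The scaling factors 0.75 and 0.875 used below are illustrative shift-and-add-friendly values, not the factors derived in the paper, and the paper's approximation of one scaling by the difference between min1 and min2 is not reproduced; this is only a reference CN update to show where the two factors apply.

```python
# Illustrative 2-D scaled min-sum check node update: the output magnitude is
# min1 for every edge except the one contributing min1, which gets min2, and
# the two magnitudes are scaled by different factors (assumed values here).
import numpy as np

def scaled_min_sum_cn_update(v2c, alpha1=0.75, alpha2=0.875):
    """v2c: incoming variable-to-check LLRs on one check node's edges
    (assumed nonzero). Returns the outgoing check-to-variable messages."""
    v2c = np.asarray(v2c, dtype=float)
    mags = np.abs(v2c)
    order = np.argsort(mags)
    min1_idx = order[0]
    min1, min2 = mags[order[0]], mags[order[1]]
    total_sign = np.prod(np.sign(v2c))
    out = np.empty_like(v2c)
    for k in range(len(v2c)):
        # The edge that supplied min1 receives the scaled min2 instead.
        mag = alpha2 * min2 if k == min1_idx else alpha1 * min1
        # Sign excludes the edge's own contribution (signs are +/-1).
        out[k] = total_sign * np.sign(v2c[k]) * mag
    return out

print(scaled_min_sum_cn_update([2.0, -0.5, 1.5, -3.0]))
```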

Enhancing Security in Mobile IPv6

  • Modares, Hero;Moravejosharieh, Amirhossein;Salleh, Rosli Bin;Lloret, Jaime
    • ETRI Journal / v.36 no.1 / pp.51-61 / 2014
  • In the Mobile IPv6 (MIPv6) protocol, a mobile node (MN) is a mobile device with a permanent home address (HoA) on its home link. The MN will acquire a care-of address (CoA) when it roams into a foreign link. It then sends a binding update (BU) message to the home agent (HA) and the correspondent node (CN) to inform them of its current CoA so that future data packets destined for its HoA will be forwarded to the CoA. The BU message, however, is vulnerable to different types of security attacks, such as the man-in-the-middle attack, the session hijacking attack, and the denial-of-service attack. The current security protocols in MIPv6 are not able to effectively protect the BU message against these attacks. The private-key-based BU (PKBU) protocol is proposed in this research to overcome the shortcomings of some existing MIPv6 protocols. PKBU incorporates a method to assert the address ownership of the MN, thus allowing the CN to validate that the MN is not a malicious node. The results obtained show that it addresses the security requirements while being able to check the address ownership of the MN. PKBU also incorporates a method to verify the reachability of the MN.