• Title/Summary/Keyword: Erasure network


Throughput Scaling Law of Hybrid Erasure Networks Based on Physical Model (물리적 모델 기반 혼합 소거 네트워크의 용량 스케일링 법칙)

  • Shin, Won-Yong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.1
    • /
    • pp.57-62
    • /
    • 2014
  • The benefits of infrastructure support are shown by analyzing the throughput scaling law of an erasure network in which multiple relay stations (RSs) are regularly placed. By suitably modeling the erasure probabilities of the assumed network, we derive an achievable network throughput for the hybrid erasure network. More specifically, we use two types of physical models, an exponential decay model and a polynomial decay model. We then analyze the achievable throughput of two existing schemes: multi-hop transmission with and without the help of RSs. Our result indicates that, for both physical models, the derived throughput scaling law depends on the number of nodes and the number of RSs.
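
As a rough illustration only (the functional forms below are our assumptions, not the paper's exact models), the two physical models can be thought of as assigning a distance-dependent erasure probability to each hop, with the end-to-end success of a multi-hop route being the product of the per-hop survival probabilities:

```python
import math

def erasure_prob_exponential(distance, lam=1.0):
    """Assumed exponential decay model: a hop of length d succeeds with prob e^(-lam*d)."""
    return 1.0 - math.exp(-lam * distance)

def erasure_prob_polynomial(distance, alpha=2.0):
    """Assumed polynomial decay model: survival probability decays as d^(-alpha), capped at 1."""
    return 1.0 - min(1.0, float(distance) ** (-alpha))

def route_success_prob(hop_distances, erasure_prob):
    """End-to-end success probability of a multi-hop route: every hop must avoid erasure."""
    p = 1.0
    for d in hop_distances:
        p *= 1.0 - erasure_prob(d)
    return p

# End-to-end success of a 5-hop route with hop length 1.5 under each assumed model.
hops = [1.5] * 5
print(route_success_prob(hops, erasure_prob_exponential))
print(route_success_prob(hops, erasure_prob_polynomial))
```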

A Family of Concatenated Network Codes for Improved Performance With Generations

  • Thibault, Jean-Pierre;Chan, Wai-Yip;Yousefi, Shahram
    • Journal of Communications and Networks
    • /
    • v.10 no.4
    • /
    • pp.384-395
    • /
    • 2008
  • Random network coding can be viewed as a single block code applied to all source packets. To manage the concomitant high coding complexity, source packets can be partitioned into generations; block coding is then performed on each set. To reach a better performance-complexity tradeoff, we propose a novel concatenated network code which mixes generations while retaining the desirable properties of generation-based coding. Focusing on the code's erasure performance, we show that the probability of successfully decoding a generation on erasure channels can increase substantially for any erasure rate. Using both analysis (for small networks) and simulations (for larger networks), we show how the code's parameters can be tuned to extract best performance. As a result, the probability of failing to decode a generation is reduced by nearly one order of magnitude.
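
For readers unfamiliar with generation-based coding, here is a minimal sketch of the baseline scheme the paper improves on: random linear coding within a single generation, decoded by Gaussian elimination. Coefficients are drawn over GF(2) for brevity, and the concatenated construction that mixes generations is not reproduced here.

```python
import random

def encode_generation(packets, num_coded, rng):
    """Random linear coding over GF(2): each coded packet is the XOR of a random
    subset of the generation's source packets, tagged with its coefficient vector."""
    k = len(packets)
    coded = []
    for _ in range(num_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        payload = bytes(len(packets[0]))
        for i, c in enumerate(coeffs):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, packets[i]))
        coded.append((coeffs, payload))
    return coded

def decode_generation(coded, k):
    """Gaussian elimination over GF(2); returns the k source packets, or None if the
    received coefficient vectors do not (yet) have full rank."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    pivots = []
    for col in range(k):
        pivot = None
        for r in rows:
            if r[0][col] == 1 and all(r is not p for p in pivots):
                pivot = r
                break
        if pivot is None:
            return None
        pivots.append(pivot)
        for r in rows:
            if r is not pivot and r[0][col] == 1:
                r[0][:] = [a ^ b for a, b in zip(r[0], pivot[0])]
                r[1][:] = bytes(a ^ b for a, b in zip(r[1], pivot[1]))
    return [bytes(p[1]) for p in pivots]

# 4 source packets, 8 coded packets: decoding succeeds whenever the random
# coefficient vectors span GF(2)^4, which happens with high probability.
rng = random.Random(7)
source = [bytes([i]) * 16 for i in range(4)]
coded = encode_generation(source, 8, rng)
decoded = decode_generation(coded, 4)
print(decoded == source if decoded else "need more coded packets")
```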

Effect of Random Node Distribution on the Throughput in Infrastructure-Supported Erasure Networks (인프라구조 도움을 받는 소거 네트워크에서 용량에 대한 랜덤 노드 분포의 효과)

  • Shin, Won-Yong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.5
    • /
    • pp.911-916
    • /
    • 2016
  • The nearest-neighbor multihop routing with/without infrastructure support is known to achieve the optimal capacity scaling in a large packet-erasure network in which multiple wireless nodes and relay stations are regularly placed and packets are erased with a certain probability. In this paper, a throughput scaling law is shown for an infrastructure-supported erasure network in which wireless nodes are randomly distributed, which is a more feasible scenario. We use an exponential decay model to suitably model the erasure probability. To achieve high throughput in hybrid random erasure networks, multihop routing via highways constructed using percolation theory is proposed, and the corresponding throughput scaling is derived. As a main result, the proposed percolation-highway-based routing scheme achieves the same throughput scaling as the nearest-neighbor multihop case in hybrid regular erasure networks. That is, it is shown that no performance loss occurs even when nodes are randomly distributed.
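
A minimal sketch of the percolation idea behind the highway construction, under our own simplified cell model (not the paper's exact parameters): nodes fall uniformly at random in the unit square, the square is cut into small cells, a cell is "open" if it contains at least one node, and a left-to-right path of open cells plays the role of a highway.

```python
import random
from collections import deque

def has_horizontal_highway(num_nodes, cells_per_side, seed=0):
    """Check whether open cells (cells containing >= 1 node) percolate left to right."""
    rng = random.Random(seed)
    m = cells_per_side
    open_cell = [[False] * m for _ in range(m)]
    for _ in range(num_nodes):
        x, y = rng.random(), rng.random()
        open_cell[min(int(y * m), m - 1)][min(int(x * m), m - 1)] = True

    # BFS from every open cell in the leftmost column toward the rightmost column.
    queue = deque((r, 0) for r in range(m) if open_cell[r][0])
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if c == m - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < m and 0 <= nc < m and open_cell[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

# Denser networks make a left-to-right crossing (a "highway") overwhelmingly likely.
print(has_horizontal_highway(num_nodes=2000, cells_per_side=20))
```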

A Packet Loss Control Scheme based on Network Conditions and Data Priority (네트워크 상태와 데이타 중요도에 기반한 패킷 손실 제어 기법)

  • Park, Tae-Uk;Chung, Ki-Dong
    • Journal of KIISE:Information Networking
    • /
    • v.31 no.1
    • /
    • pp.1-10
    • /
    • 2004
  • This study discusses application-layer FEC using erasure codes. Because of their simple decoding process, erasure codes are used effectively in application-layer FEC to deal with packet-level errors. A large number of parity packets keeps the loss rate small but aggravates network congestion. Thus, a redundancy control algorithm that can adjust the number of parity packets depending on network conditions is necessary. In addition, it is natural that high-priority frames such as I frames should be given more parity packets than low-priority frames such as P and B frames. In this paper, we propose a redundancy control algorithm that adjusts the amount of redundancy according to network conditions and data priority, and we test its performance on simple links and congested links.
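
A minimal sketch of a loss- and priority-aware redundancy controller of the kind described above; the weighting factors, the congestion back-off, and the function name are our own assumptions, not the paper's algorithm.

```python
import math

# Assumed priority weights: I frames get more protection than P frames, P more than B.
PRIORITY_WEIGHT = {"I": 1.5, "P": 1.0, "B": 0.5}

def parity_packet_count(data_packets, observed_loss_rate, congestion_level, frame_type):
    """Choose the number of parity packets for one FEC block.

    observed_loss_rate: recent packet-loss ratio reported by the receiver (0..1).
    congestion_level:   0 (idle) .. 1 (heavily congested); scales redundancy down
                        so that extra parity does not worsen congestion.
    """
    base = data_packets * observed_loss_rate            # cover the expected losses
    weighted = base * PRIORITY_WEIGHT[frame_type]       # protect important frames more
    damped = weighted * (1.0 - 0.5 * congestion_level)  # back off under congestion
    return max(0, math.ceil(damped))

# An I frame on a lossy but uncongested link vs. a B frame on a congested one.
print(parity_packet_count(20, 0.10, 0.1, "I"))  # -> 3
print(parity_packet_count(20, 0.10, 0.8, "B"))  # -> 1
```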

A Study on the Adaptive Erasure Node Algorithm for the DQDB Metropolitan Area Network (DQDB MAN을 위한 적응 소거노드 알고리듬에 관한 연구)

  • 김덕환;한치문;김대영
    • Journal of the Korean Institute of Telematics and Electronics A
    • /
    • v.30A no.5
    • /
    • pp.1-15
    • /
    • 1993
  • In DQDB networks, the bandwidth can be increased considerably by using EN (erasure node) and DR (destination release) algorithms. However, an important issue in implementing them is how to use the extra capacity fairly. To address this, this paper proposes the AEN (adaptive erasure node) algorithm, whose erasure function is activated according to the network traffic load. Its functional architecture consists of the SESM, RCSM, and LMSM state machines in addition to the basic DQDB state machines (DQSM, RQM). The SESM and RCSM state machines are placed in front of the DQSM and RQM state machines so that the node can take advantage of newly cleared slots. This paper also presents simulation results showing the effect of the AEN algorithm on access delay, throughput, and segment erasing ratio in single- and multiple-priority networks. The results show that the AEN algorithm offers better performance characteristics than existing algorithms under overload conditions.
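
A toy sketch of the erasure-node idea only (the AEN state machines themselves are not reproduced): a node reclaims slots whose data has already passed its destination, but only while the local traffic load exceeds an activation threshold. The slot representation and the threshold value are assumptions for illustration.

```python
def pass_slots_through_node(slots, node_id, traffic_load, activation_threshold=0.7):
    """Toy erasure-node behavior: when the local traffic load is high enough, clear
    ('erase') every busy slot whose destination lies upstream of this node, so the
    slot can be reused downstream instead of travelling to the end of the bus."""
    erased = 0
    if traffic_load >= activation_threshold:
        for slot in slots:
            if slot["busy"] and slot["destination"] < node_id:
                slot["busy"] = False
                slot["destination"] = None
                erased += 1
    return erased

# Slots travel along the bus in order of increasing node id; node 5 reclaims the
# slot addressed to node 2 only while the network is overloaded.
bus = [{"busy": True, "destination": 2}, {"busy": True, "destination": 8}]
print(pass_slots_through_node(bus, node_id=5, traffic_load=0.9))  # -> 1
print(bus)
```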


Packet-Level Scheduling for Implant Communications Using Forward Error Correction in an Erasure Correction Mode for Reliable U-Healthcare Service

  • Lee, Ki-Dong;Kim, Sang-G.;Yi, Byung-K.
    • Journal of Communications and Networks
    • /
    • v.13 no.2
    • /
    • pp.160-166
    • /
    • 2011
  • In u-healthcare services based on wireless body sensor networks, a reliable connection is very important, as many types of information, including vital signs, are transmitted through the networks. The transmit power requirements are very stringent in the case of in-body networks for implant communication. Furthermore, the wireless link in an in-body environment has a high degree of path loss (e.g., the path loss exponent is around 6.2 for deep tissue). Because of such inherently harsh conditions for the communication nodes, a multi-hop network topology is preferred in order to meet the transmit power requirements and to increase the battery lifetime of sensor nodes. This also ensures that the body of a patient receiving the healthcare service is exposed to a reduced specific absorption rate (SAR) under long-lasting radiation. We propose an efficient method for delivering delay-intolerant data packets over multiple hops. We consider forward error correction (FEC) in an erasure correction mode and develop a mathematical formulation for packet-level scheduling of delay-intolerant FEC packets over multiple hops. The proposed method can be used as a simple guideline for setting up a topology for the medical body sensor network of each individual patient, which is connected to a remote server for u-healthcare service applications.
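
A small worked example of the kind of packet-level calculation such scheduling involves, assuming (our assumption, not the paper's formulation) that each coded packet on a hop is erased independently with probability p: the sender needs the smallest n such that at least k of n packets arrive with the required reliability.

```python
from math import comb

def prob_at_least_k(n, k, erasure_prob):
    """P(at least k of n packets survive) under independent erasures."""
    p_ok = 1.0 - erasure_prob
    return sum(comb(n, i) * p_ok**i * erasure_prob**(n - i) for i in range(k, n + 1))

def packets_needed(k, erasure_prob, target=0.999, n_max=200):
    """Smallest number of FEC-coded packets meeting the per-hop reliability target."""
    for n in range(k, n_max + 1):
        if prob_at_least_k(n, k, erasure_prob) >= target:
            return n
    raise ValueError("target unreachable within n_max packets")

# 10 source packets, 20% per-hop erasure probability, 99.9% per-hop delivery target.
print(packets_needed(k=10, erasure_prob=0.2, target=0.999))
```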

The Design of Regenerating Codes with a Varying Number of Helper Nodes (다양한 도움 노드의 수를 가지는 재생 부호의 설계)

  • Lee, Hyuk;Lee, Jungwoo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.12
    • /
    • pp.1684-1691
    • /
    • 2016
  • Erasure codes have recently been applied to distributed storage systems due to their high storage efficiency. Regenerating codes are a class of erasure codes that are optimal in terms of minimum repair bandwidth. An (n,k,d)-regenerating code consists of n storage nodes, where a failed node can be recovered with the help of exactly d surviving nodes. However, if node failures occur frequently or the network connection is unstable, the number of helper nodes that a failed node can contact may be smaller than d. In such cases, regenerating codes cannot repair the failed node efficiently, since their node repair process does not work when the number of helper nodes is less than d. In this paper, we propose an operating method for regenerating codes in which a failed node can be repaired from $\bar{d}$ helper nodes, where $k \leq \bar{d} \leq d$.
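
For context, the classic cut-set values at the minimum-storage regenerating (MSR) point illustrate why the number of helpers d matters: for an (n, k, d) code storing M symbols, each node stores α = M/k symbols and a repair downloads β = M/(k(d - k + 1)) symbols from each of d helpers, so the total repair traffic dβ shrinks as d grows. These are the standard formulas, not the construction proposed in the paper.

```python
from fractions import Fraction

def msr_repair_bandwidth(file_size, n, k, d):
    """Repair bandwidth at the MSR point: each of d helpers sends beta = M / (k*(d-k+1))."""
    assert k <= d <= n - 1, "the number of helpers must lie between k and n-1"
    beta = Fraction(file_size, k * (d - k + 1))
    return d * beta

# (n, k) = (10, 5), file of 1000 symbols: more helpers means less total repair traffic.
for d in range(5, 10):
    print(d, msr_repair_bandwidth(1000, 10, 5, d))
```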

Torus Network Based Distributed Storage System for Massive Multimedia Contents (토러스 연결망 기반의 대용량 멀티미디어용 분산 스토리지 시스템)

  • Kim, Cheiyol;Kim, Dongoh;Kim, Hongyeon;Kim, Youngkyun;Seo, Daewha
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.8
    • /
    • pp.1487-1497
    • /
    • 2016
  • The explosive growth of digital multimedia services increases the need for highly scalable, low-cost storage. This paper proposes a new storage architecture that is based on a torus network, which needs no network switches, and that uses erasure coding for efficient disk utilization and high scalability. The proposed model has to compensate for the torus network's drawbacks of long network latency and network processing overhead. The proposed storage model was compared with the two most popular distributed file systems, GlusterFS and Ceph, through a prototype implementation. The prototype outperforms the erasure-coding policies of both file systems and, in most cases, even their replication policies.
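
A tiny illustration of the wrap-around connectivity a 2D torus interconnect provides (our own toy addressing, unrelated to the paper's chunk-placement policy): every node has direct links to four neighbors, so no central network switch is needed.

```python
def torus_neighbors(x, y, width, height):
    """Immediate neighbors of a node on a 2D torus: coordinates wrap around, so every
    node links directly to four peers without any intermediate switch."""
    return [((x + 1) % width, y), ((x - 1) % width, y),
            (x, (y + 1) % height), (x, (y - 1) % height)]

# On a 4x4 torus, even the corner node (0, 0) has four neighbors thanks to wraparound.
print(torus_neighbors(0, 0, 4, 4))
```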

Reliable Data Transmission Based on Erasure-resilient Code in Wireless Sensor Networks

  • Lei, Jian-Jun;Kwon, Gu-In
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.1
    • /
    • pp.62-77
    • /
    • 2010
  • Emerging applications with high data rates will need to transport bulk data reliably in wireless sensor networks. ARQ (automatic repeat request) or forward error correction (FEC) schemes can be used to provide reliable transmission in a sensor network. However, the naive ARQ approach drops the whole frame even when there is a single bit error in the frame, and bit-level FEC may require a highly complex method to adjust the amount of redundancy. In this paper, we propose a bulk data transmission scheme based on erasure-resilient codes to overcome these inefficiencies. The sender fragments bulk data into many small blocks, encodes the blocks with LT codes, and packages several such blocks into a frame. The receiver drops only the corrupted blocks (rather than the entire frame), and the original data can be reconstructed once sufficient error-free blocks are received. An incidental benefit is that error recovery becomes independent of the frame error rate (FER) and hence of the frame size. A frame can therefore be made large enough to provide high utilization of the wireless channel bandwidth without sacrificing the effectiveness of error recovery. The scheme has been implemented as a new data link layer in TinyOS and evaluated through experiments on a testbed of Zigbex motes. Results show that single-hop transmission throughput can be improved by at least 20% under typical wireless channel conditions. The scheme also reduces the transmission time of files over a reasonable range of sizes by more than 30%, compared to a frame ARQ scheme. The total number of bytes sent by all nodes in multi-hop communication is reduced by more than 60% compared to the frame ARQ scheme.
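
A minimal sketch of block-level LT coding and peeling decoding as described above; the degree distribution below is a toy stand-in (real LT codes use the robust soliton distribution), and the frame packaging layer is omitted.

```python
import random

def lt_encode(blocks, num_encoded, seed=0):
    """Generate LT-coded packets: each packet is the XOR of a few randomly chosen blocks."""
    rng = random.Random(seed)
    k = len(blocks)
    encoded = []
    for _ in range(num_encoded):
        # Toy degree distribution, not the robust soliton distribution.
        degree = 1 if rng.random() < 1.0 / k else rng.randint(1, min(4, k))
        neighbors = rng.sample(range(k), degree)
        payload = bytes(len(blocks[0]))
        for i in neighbors:
            payload = bytes(a ^ b for a, b in zip(payload, blocks[i]))
        encoded.append((frozenset(neighbors), payload))
    return encoded

def lt_decode(encoded, k):
    """Peeling decoder: repeatedly resolve degree-1 packets and subtract known blocks."""
    recovered = {}
    pending = [(set(n), bytearray(p)) for n, p in encoded]
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for neighbors, payload in pending:
            for i in list(neighbors):
                if i in recovered:
                    payload[:] = bytes(a ^ b for a, b in zip(payload, recovered[i]))
                    neighbors.discard(i)
            if len(neighbors) == 1:
                i = neighbors.pop()
                if i not in recovered:
                    recovered[i] = bytes(payload)
                    progress = True
    return [recovered.get(i) for i in range(k)]

# 8 source blocks, 24 coded packets; recovery depends on enough degree-1 packets appearing.
blocks = [bytes([i]) * 32 for i in range(8)]
coded = lt_encode(blocks, 24, seed=3)
decoded = lt_decode(coded, k=8)
print(sum(b is not None for b in decoded), "of 8 blocks recovered")
```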

Practical Schemes for Tunable Secure Network Coding

  • Liu, Guangjun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.3
    • /
    • pp.1193-1209
    • /
    • 2015
  • Network coding promises to maximize network throughput and improve resilience to random network failures in various networking systems. In this paper, the problem of providing efficient confidentiality for a practical network coding system against a global eavesdropper (with full eavesdropping capability over the network) is considered. By exploiting a novel combination of the construction technique of systematic Maximum Distance Separable (MDS) erasure coding and a traditional cryptographic approach, two efficient schemes are proposed that achieve the maximum possible rate and the minimum encryption overhead, respectively, on top of any communication network or underlying linear network code. Every generation is first encoded by a particular matrix generated from two (or three) Vandermonde matrices, and then parts of the coded vectors (or secret symbols) are encrypted before transmission. The proposed schemes are characterized by tunable and measurable degrees of security and are also shown to incur low overhead in computation and bandwidth.
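
A minimal sketch of the MDS-plus-encryption idea, with stand-ins for the paper's construction: the generator here is a plain (non-systematic) Vandermonde/Reed-Solomon evaluation code over GF(257), and the "encryption" is a toy one-time pad applied to a subset of coded symbols.

```python
P = 257  # prime field GF(257): one byte per symbol, with headroom

def poly_mul(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first) over GF(P)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def rs_encode(message, n):
    """MDS encoding: evaluate the message polynomial at n distinct points (a Vandermonde generator)."""
    assert n <= P
    return [sum(m * pow(x, j, P) for j, m in enumerate(message)) % P for x in range(n)]

def encrypt_symbols(code, key_stream, count):
    """Toy stand-in for the encryption step: one-time-pad the first `count` coded symbols."""
    return [(c + key_stream[i]) % P if i < count else c for i, c in enumerate(code)]

def rs_decode(points):
    """Recover the k message symbols from any k (x, y) pairs by Lagrange interpolation."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        num = [1]     # product of (X - xj) for j != i
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = poly_mul(num, [(-xj) % P, 1])
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P
        for d, c in enumerate(num):
            coeffs[d] = (coeffs[d] + scale * c) % P
    return coeffs

msg = [10, 20, 30, 40]                        # k = 4 source symbols
code = rs_encode(msg, 8)                      # n = 8 coded symbols
chosen = [(x, code[x]) for x in (1, 3, 6, 7)]
print(rs_decode(chosen) == msg)               # True: any k symbols reconstruct the data
```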