• Title/Summary/Keyword: Loss Allocation


Transmitter Beamforming and Artificial Noise with Delayed Feedback: Secrecy Rate and Power Allocation

  • Yang, Yunchuan; Wang, Wenbo; Zhao, Hui; Zhao, Long
    • Journal of Communications and Networks / v.14 no.4 / pp.374-384 / 2012
  • Utilizing artificial noise (AN) is a good means to guarantee security against eavesdropping in a multiple-input multiple-output system, where the AN is designed to lie in the null space of the legitimate receiver's channel direction information (CDI). However, imperfect CDI leads to noise leakage at the legitimate receiver and causes a significant loss in the achievable secrecy rate. In this paper, we consider a delayed feedback system and investigate the impact of delayed CDI on security by using a transmit beamforming and AN scheme. By exploiting the Gauss-Markov fading spectrum to model the feedback delay, we derive a closed-form expression of the upper bound on the secrecy rate loss for $N_t$ = 2. For a moderate number of antennas, $N_t$ > 2, two special cases, based on the first-order statistics of the noise leakage and the law of large numbers, are explored to approximate the respective upper bounds. In addition, to maintain a constant signal-to-interference-plus-noise ratio degradation, we analyze the corresponding delay constraint. Furthermore, based on the obtained closed-form expression of the lower bound on the achievable secrecy rate, we investigate an optimal power allocation strategy between the information signal and the AN. The analytical and numerical results obtained from first-order statistics can be regarded as a good approximation of the capacity achievable at the legitimate receiver with a certain number of antennas $N_t$. In addition, for a given delay, we show that the optimal power allocation is not sensitive to the number of antennas in a high signal-to-noise ratio regime. The simulation results further indicate that the achievable secrecy rate with optimal power allocation can be improved significantly compared to that with fixed power allocation. Moreover, as the delay increases, the ratio of power allocated to the AN should be decreased to reduce the secrecy rate degradation.
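
    • For illustration only: a minimal Monte-Carlo sketch of the power-allocation trade-off discussed above, assuming a first-order Gauss-Markov delay model and a beamforming-plus-AN transmitter with a single AN direction; the function, its parameters, and the numbers are illustrative assumptions, not the paper's derivation:

      import numpy as np

      # Sketch: effect of delayed CDI on the achievable secrecy rate under a
      # transmit beamforming + artificial-noise (AN) scheme. Assumptions: Nt
      # transmit antennas, single-antenna legitimate receiver and eavesdropper,
      # first-order Gauss-Markov model for the feedback delay, one AN direction.
      def secrecy_rate(nt=4, rho=0.95, snr_db=20.0, phi=0.7, trials=2000, rng=None):
          """Monte-Carlo estimate of E[log2(1+SINR_B) - log2(1+SINR_E)]^+.
          phi: fraction of total power given to the information signal.
          rho: temporal correlation of the Gauss-Markov delay model."""
          rng = rng or np.random.default_rng(0)
          p = 10 ** (snr_db / 10)  # total transmit power with unit noise power
          cplx = lambda n: (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
          rates = []
          for _ in range(trials):
              h_true, e, g, z = cplx(nt), cplx(nt), cplx(nt), cplx(nt)
              h_fb = rho * h_true + np.sqrt(1 - rho ** 2) * e   # delayed CDI at the transmitter
              w = h_fb / np.linalg.norm(h_fb)                   # beamformer built from delayed CDI
              v = z - w * (w.conj() @ z)                        # AN direction orthogonal to the fed-back CDI
              v /= np.linalg.norm(v)
              sinr_b = phi * p * abs(h_true.conj() @ w) ** 2 / (1 + (1 - phi) * p * abs(h_true.conj() @ v) ** 2)
              sinr_e = phi * p * abs(g.conj() @ w) ** 2 / (1 + (1 - phi) * p * abs(g.conj() @ v) ** 2)
              rates.append(max(np.log2(1 + sinr_b) - np.log2(1 + sinr_e), 0.0))
          return np.mean(rates)

      # Sweep the power split to find the best fraction phi for a given delay (rho).
      best = max(np.linspace(0.05, 0.95, 19), key=lambda phi: secrecy_rate(phi=phi))
      print("best information-signal power fraction:", round(float(best), 2))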

Channel Allocation Method for OFDMA Based Contiguous Resources Units with H-ARQ to Enhance Channel Throughput (H-ARQ가 적용된 OFDMA 기반 연접할당자원에 대한 전송률 향상을 위한 채널 할당 방법)

  • Kim, Sang-Hyun; Jung, Young-Ho
    • Journal of Advanced Navigation Technology / v.15 no.3 / pp.386-391 / 2011
  • The CRU (contiguous resource unit), composed of adjacent OFDMA subcarriers, is widely adopted in recently developed cellular communication standards, e.g., IEEE 802.16e/m. If multiple CRUs with different SNRs are assigned to a mobile station and multiple packet streams are transmitted independently using H-ARQ, the achievable data rate can vary with how channels are allocated to retransmission packets and new transmission packets. In this paper, the optimum channel allocation method for this problem is proposed, together with several sub-optimum channel allocation methods that reduce the computational complexity of the optimum method. According to the simulation results, a sub-optimum allocation method that assigns a CRU with good SNR to the new transmission packet shows marginal performance loss compared with the optimum method, while its computational complexity is significantly reduced.
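
    • For illustration only: a minimal sketch of the kind of sub-optimum rule described above (the concrete form is an assumption, not the paper's exact algorithm) — give the CRU with the best SNR to the new transmission packet and the remaining CRUs to the H-ARQ retransmissions:

      # Assign the best-SNR CRU to the new packet; next-best CRUs carry retransmissions.
      def allocate_crus(cru_snrs_db, num_retx):
          """cru_snrs_db: per-CRU SNR; num_retx: number of H-ARQ retransmission packets."""
          order = sorted(range(len(cru_snrs_db)), key=lambda i: cru_snrs_db[i], reverse=True)
          new_packet_cru = order[0]            # best-SNR CRU -> new transmission
          retx_crus = order[1:1 + num_retx]    # remaining good CRUs -> retransmissions
          return new_packet_cru, retx_crus

      print(allocate_crus([3.2, 9.7, 6.1, 0.5], num_retx=2))   # -> (1, [2, 0])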

LDBAS: Location-aware Data Block Allocation Strategy for HDFS-based Applications in the Cloud

  • Xu, Hua; Liu, Weiqing; Shu, Guansheng; Li, Jing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.1 / pp.204-226 / 2018
  • Big data processing applications have been gradually migrated into the cloud, owing to the advantages of cloud computing. The Hadoop Distributed File System (HDFS) is one of the fundamental support systems for big data processing on MapReduce-like frameworks such as Hadoop and Spark. Since HDFS is not aware of the co-location of virtual machines in the cloud, its default block allocation scheme does not fit cloud environments well, which manifests in two aspects: data reliability loss and performance degradation. In this paper, we present a novel location-aware data block allocation strategy (LDBAS). LDBAS jointly optimizes data reliability and performance for upper-layer applications by allocating data blocks according to the locations and different processing capacities of the virtual nodes in the cloud. We apply LDBAS to two stages of data allocation in HDFS in the cloud (initial data allocation and data recovery) and design the corresponding algorithms. Finally, we implement LDBAS on an actual Hadoop cluster and evaluate its performance with the benchmark suite BigDataBench. The experimental results show that LDBAS can guarantee the designed data reliability while reducing the job execution time of I/O-intensive applications in Hadoop by 8.9% on average, and by up to 11.2%, compared with the original Hadoop in the cloud.
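
    • For illustration only: a minimal sketch of location-aware replica placement in the spirit of LDBAS (an illustration of the idea, not the LDBAS algorithm itself), assuming a host ID and a capacity score are known for each virtual node:

      # Avoid placing two replicas on VMs that share a physical host (reliability),
      # and prefer nodes with higher processing capacity (performance).
      def place_replicas(nodes, num_replicas=3):
          """nodes: list of dicts like {"name": ..., "host": ..., "capacity": ...}."""
          chosen, used_hosts = [], set()
          for node in sorted(nodes, key=lambda n: n["capacity"], reverse=True):
              if node["host"] in used_hosts:
                  continue                      # skip co-located VMs
              chosen.append(node["name"])
              used_hosts.add(node["host"])
              if len(chosen) == num_replicas:
                  break
          return chosen

      nodes = [{"name": "vm1", "host": "A", "capacity": 4},
               {"name": "vm2", "host": "A", "capacity": 8},
               {"name": "vm3", "host": "B", "capacity": 6},
               {"name": "vm4", "host": "C", "capacity": 2}]
      print(place_replicas(nodes))   # -> ['vm2', 'vm3', 'vm4']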

Resource Allocation Schemes for Legacy OFDMA Systems with Two-Way DF Relay (양방향 복호전달 릴레이를 사용하는 레거시 OFDMA 시스템에서의 자원 할당 기법)

  • Seo, Jongpil; Han, Chulhee; Park, Seongho; Chung, Jaehak
    • The Journal of Korean Institute of Communications and Information Sciences / v.39A no.10 / pp.593-600 / 2014
  • OFDMA systems solve the frequency-selective fading problem and provide improved performance through optimal allocation of subcarriers and transmit power. Two-way relay systems provide improved spectral efficiency compared with conventional half-duplex relays by using bidirectional communications. In legacy OFDMA systems such as WiBro, however, using a two-way DF relay causes pilot re-assignment and makes channel estimation and decoding at the relay nodes impossible because of self-interference. In this paper, resource allocation schemes for legacy OFDMA systems with a two-way DF relay are proposed. The proposed schemes allocate subcarriers by treating destination nodes connected to relay nodes as individual nodes directly connected to the base station. Subsequently, the proposed schemes compensate for the bandwidth loss caused by orthogonal allocation by overlapping the allocation of subcarriers left unused at other nodes. Numerical simulations show that the proposed resource allocation schemes provide improved performance compared with orthogonal allocation.
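
    • For illustration only: a minimal sketch of the allocation idea described above (an assumed form, not the paper's exact scheme) — destinations behind the relay compete for orthogonal subcarriers like directly connected users, and subcarriers left unused are re-allocated (overlapped) to recover the bandwidth lost to orthogonality:

      def allocate(gains, demand):
          """gains[u][k]: channel gain of user u on subcarrier k; demand[u]: subcarriers user u needs."""
          users = list(gains)
          num_sc = len(gains[users[0]])
          assign = {u: [] for u in users}
          free = set(range(num_sc))
          # Orthogonal pass: relay-served destinations are treated like direct users.
          for u in users:
              best = sorted(free, key=lambda k: gains[u][k], reverse=True)[:demand[u]]
              assign[u].extend(best)
              free -= set(best)
          # Overlap pass: unused subcarriers go to whichever node gains the most.
          for k in free:
              assign[max(users, key=lambda u: gains[u][k])].append(k)
          return assign

      gains = {"direct": [0.9, 0.2, 0.5, 0.7], "via_relay": [0.3, 0.8, 0.6, 0.1]}
      print(allocate(gains, demand={"direct": 1, "via_relay": 1}))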

Threshold-based Filtering Buffer Management Scheme in a Shared Buffer Packet Switch

  • Yang, Jui-Pin; Liang, Ming-Cheng; Chu, Yuan-Sun
    • Journal of Communications and Networks / v.5 no.1 / pp.82-89 / 2003
  • In this paper, an efficient threshold-based filtering (TF) buffer management scheme is proposed. TF is capable of minimizing the overall loss performance and improving the fairness of buffer usage in a shared-buffer packet switch. TF consists of two mechanisms. One classifies the output ports as active or inactive by comparing their queue lengths with a dedicated buffer allocation factor. The other filters the arriving packets of inactive output ports when the total queue length exceeds a threshold value. A theoretical queueing model of TF is formulated and solved for the overall packet loss probability. Computer simulations are used to compare the overall loss performance of TF, dynamic threshold (DT), static threshold (ST), and pushout (PO). We find that the TF scheme is more robust against dynamic traffic variations than DT and ST. Also, although the overall loss performance of TF and PO is close, the implementation of TF is much simpler than that of PO.
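
    • For illustration only: a minimal sketch of a TF-style admission decision (an assumed concrete reading of the two mechanisms above, not the paper's queueing model); here the port whose queue already exceeds its dedicated allocation is the one whose arrivals are filtered once the shared buffer passes the threshold:

      def admit(port_queue_len, total_queue_len, dedicated_alloc, threshold, buffer_size):
          if total_queue_len >= buffer_size:
              return False                                      # shared buffer full: drop
          over_allocated = port_queue_len > dedicated_alloc     # mechanism 1: classify the port
          if over_allocated and total_queue_len > threshold:
              return False                                      # mechanism 2: filter under pressure
          return True

      print(admit(port_queue_len=30, total_queue_len=220,
                  dedicated_alloc=25, threshold=200, buffer_size=256))   # -> False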

Adaptive Importance Channel Selection for Perceptual Image Compression

  • He, Yifan; Li, Feng; Bai, Huihui; Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.9 / pp.3823-3840 / 2020
  • Recently, the auto-encoder has emerged as the most popular method in convolutional neural network (CNN) based image compression and has achieved impressive performance. In the traditional auto-encoder based image compression model, the encoder simply sends the features of the last layer to the decoder, which cannot allocate bits over different spatial regions in an efficient way. Besides, these methods do not fully exploit the contextual information under different receptive fields for better reconstruction performance. In this paper, to solve these issues, a novel auto-encoder model is designed for image compression, which can effectively transmit the hierarchical features of the encoder to the decoder. Specifically, we first propose an adaptive bit-allocation strategy, which can adaptively select an importance channel. Then, we multiply the generated importance mask with the features of the last layer of the proposed encoder to achieve efficient bit allocation. Moreover, we present an additional novel perceptual loss function for more accurate image details. Extensive experiments demonstrate that the proposed model achieves significant superiority over JPEG and JPEG2000 in both subjective and objective quality. Besides, our model shows better performance than state-of-the-art CNN-based image compression methods in terms of PSNR.
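
    • For illustration only: a minimal NumPy sketch of importance-mask bit allocation (an illustration of the idea above, not the paper's network), in which one encoder channel serves as an importance map that decides how many feature channels survive at each spatial position:

      import numpy as np

      def apply_importance_mask(features, levels=8):
          """features: (C, H, W) encoder output; the last channel is the importance channel."""
          c = features.shape[0] - 1
          importance = 1 / (1 + np.exp(-features[-1]))        # squash to (0, 1)
          keep = np.ceil(importance * levels) / levels * c    # feature channels kept per position
          depth = np.arange(c)[:, None, None]                 # channel index at every position
          mask = (depth < keep[None]).astype(features.dtype)  # 1 where the channel is kept
          return features[:c] * mask                          # masked features sent to the decoder

      masked = apply_importance_mask(np.random.randn(17, 8, 8).astype(np.float32))
      print(masked.shape)   # -> (16, 8, 8)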

Compatibility between LTE Cellular Systems and WLAN (LTE 셀룰라 시스템과 무선랜의 양립성 분석)

  • Jo, Han-Shin
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.26 no.2 / pp.171-178 / 2015
  • The 3GPP long-term evolution (LTE) band at 2.3~2.4 GHz is adjacent to the 2.4~2.5 GHz band used by WLAN, so a compatibility study of the two systems is desirable. We propose a dynamic system simulation methodology to investigate the effect of WLAN interference on LTE systems. By capturing space/time/frequency changes in the system parameters, the dynamic system simulation can closely predict real system performance. Using the proposed methodology, we obtain the LTE downlink throughput loss as a function of the frequency separation between the two systems. A throughput loss below 1 % is obtained for a guard band over 11 MHz (single-channel allocation) or 10 MHz (three-channel allocation).
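
    • For illustration only: a minimal sketch of how adjacent-channel throughput loss is commonly estimated (illustrative assumptions and hypothetical numbers; the paper's dynamic system simulation is far more detailed) — WLAN interference leaking into the LTE channel is scaled by an adjacent-channel interference ratio (ACIR) that grows with the guard band, and throughput is approximated by Shannon capacity with and without that interference:

      import numpy as np

      def throughput_loss(signal_dbm, noise_dbm, wlan_dbm, acir_db):
          lin = lambda dbm: 10 ** (dbm / 10)
          c_clean = np.log2(1 + lin(signal_dbm) / lin(noise_dbm))
          interference = lin(wlan_dbm) / 10 ** (acir_db / 10)   # WLAN power after ACIR suppression
          c_interf = np.log2(1 + lin(signal_dbm) / (lin(noise_dbm) + interference))
          return 100 * (1 - c_interf / c_clean)                  # percent throughput loss

      # Hypothetical numbers: a wider guard band gives a larger ACIR and a smaller loss.
      for acir in (20, 30, 40):
          print(acir, "dB ACIR:", round(float(throughput_loss(-80, -100, -70, acir)), 2), "% loss")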

Modeling of Propagation Interference and Channel Application Solution Suggestion In the UHF Band RFID Propagation Path (UHF 대역 RFID 전파경로에서의 전파간섭 모델링 및 채널 운용 방안 제안)

  • Moon, Young-Joo; Yeo, Seon-Mi; Jeon, Bu-Won; Roh, Hyoung-Hwan; Joung, Myoung-Sub; Oh, Ha-Ryoung; Seong, Yeong-Rak; Park, Jun-Seok
    • The Transactions of The Korean Institute of Electrical Engineers / v.57 no.11 / pp.2047-2053 / 2008
  • Auto-ID industries and their services have been advancing for decades, and radio frequency identification (RFID) has contributed to many applications; product management is the foremost example. In our industrial experience, RFID in the ultra high frequency (UHF) band provides much longer interrogation ranges than 13.56 MHz RFID, and thus many more applications exist. There are several interesting and useful ideas for UHF RFID; however, these ideas can be limited by inevitable environmental circumstances that shorten the interrogation range. This paper discusses the propagation interference among different types of readers (e.g., mobile RFID readers in a stationary reader zone) in a dense-reader environment. In most cases, UHF RFID in Korea will depend on UHF mobile RFID, so UHF mobile users may accidentally move into a stationary reader's interrogation zone, which is a serious problem. In this paper, we analyze the propagation loss and propose an effective channel allocation scheme that can contribute to developing less-invasive UHF RFID networks. The simulation and practical measurement process, using commercial CAD tools and measurement equipment, is presented.
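
    • For illustration only: a minimal sketch of the kind of propagation-interference check implied above (free-space assumptions and hypothetical numbers; the paper relies on CAD-based modeling and measurements) — the path loss between an interfering reader and a victim reader decides whether the two may share a channel:

      import math

      def free_space_path_loss_db(distance_m, freq_mhz=910.0):
          # Standard free-space path loss with distance in meters and frequency in MHz.
          return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

      def needs_separate_channel(tx_power_dbm, distance_m, victim_threshold_dbm=-74.0):
          received = tx_power_dbm - free_space_path_loss_db(distance_m)
          return received > victim_threshold_dbm   # too strong: allocate a different channel

      print(needs_separate_channel(tx_power_dbm=30.0, distance_m=15.0))   # -> True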

Modeling and Analysis of Burst Switching for Wireless Packet Data (무선 패킷 데이터를 위한 Burst switching의 모델링 및 분석)

  • Park, Kyoung-In; Lee, Chae Young
    • Journal of Korean Institute of Industrial Engineers / v.28 no.2 / pp.139-146 / 2002
  • Third-generation mobile communication needs to provide multimedia services with increased data rates, so efficient allocation of radio and network resources is very important. This paper models 'burst switching' as an efficient radio resource allocation scheme, and its performance is compared with circuit and packet switching. In burst switching, a radio resource is allocated to a call for the duration of data bursts, rather than for an entire session or a single packet as in circuit and packet switching, respectively. After a stream of data bursts, if no packet arrives during the timer-2 value ($\tau_{2}$), the physical-layer channel is released and the call enters a suspended state. If no packet then arrives for the timer-1 value ($\tau_{1}$) while in the suspended state, the upper layer is also released. Thus the two timer values that minimize the sum of the access delay and the queueing delay need to be determined. In this paper, we focus on the choice of $\tau_{2}$ that minimizes the access and queueing delay under the assumption that traffic arrivals follow a Poisson process. The simulation, however, is performed with a Pareto distribution, which describes bursty traffic well. The computational results show that the delay and the packet loss probability of burst switching are dramatically reduced compared with packet switching.
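
    • For illustration only: a minimal simulation sketch of the timer-2 ($\tau_{2}$) trade-off described above (an illustrative model with Pareto inter-arrivals, not the paper's analysis) — the channel is held for $\tau_{2}$ after the last packet, a packet arriving while it is held sees no access delay, and otherwise it pays a fixed re-acquisition delay:

      import random

      def mean_delay(tau2, access_delay=0.05, alpha=1.5, xm=0.01, packets=100_000, seed=1):
          rng = random.Random(seed)
          total, held_until, t = 0.0, 0.0, 0.0
          for _ in range(packets):
              u = 1.0 - rng.random()              # uniform in (0, 1]
              t += xm / u ** (1.0 / alpha)        # Pareto(alpha, xm) inter-arrival time (bursty)
              total += 0.0 if t <= held_until else access_delay
              held_until = t + tau2               # channel stays up for tau2 after this packet
          return total / packets

      # A longer tau2 lowers the access delay but holds the channel longer (cost not modeled here).
      for tau2 in (0.01, 0.05, 0.2):
          print(tau2, "->", round(mean_delay(tau2), 4))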

Fault- Tolerant Tasking and Guidance of an Airborne Location Sensor Network

  • Wu, N. Eva; Guo, Yan; Huang, Kun; Ruschmann, Matthew C.; Fowler, Mark L.
    • International Journal of Control, Automation, and Systems / v.6 no.3 / pp.351-363 / 2008
  • This paper is concerned with the tasking and guidance of networked airborne sensors to achieve fault-tolerant sensing. The sensors are coordinated to locate hostile transmitters by intercepting and processing their signals. Faults occur when some sensor-carrying vehicles engaged in target location missions are lost. Faults effectively change the network architecture and therefore degrade the network performance. The first objective of the paper is to optimally allocate a finite number of sensors to targets to maximize the network life and availability. To that end, allocation policies are solved from the relevant Markov decision problems. The sensors allocated to a target must continue to adjust their trajectories until the estimate of the target location reaches a prescribed accuracy. The second objective of the paper is to establish a criterion for vehicle guidance under which fault-tolerant sensing is achieved by incorporating knowledge of the vehicle loss probability and by allowing network reconfiguration in the event of vehicle loss. Superior sensing performance in terms of location accuracy is demonstrated under the established criterion.
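
    • For illustration only: a minimal toy Markov-decision sketch of sensor-to-target allocation (illustrative state, action, and reward choices, not the paper's formulation) — the state is the number of surviving vehicles, the action is how many to task to the next target, each tasked vehicle is lost with probability p_loss, and the reward grows with the number of tasked sensors:

      from math import comb

      def solve(num_sensors=6, targets=4, p_loss=0.2, gamma=1.0):
          reward = lambda a: 1 - 0.5 ** a     # diminishing accuracy gain per extra sensor
          V = [[0.0] * (num_sensors + 1) for _ in range(targets + 1)]
          policy = [[0] * (num_sensors + 1) for _ in range(targets)]
          for t in range(targets - 1, -1, -1):          # backward induction over targets
              for s in range(num_sensors + 1):
                  best_val, best_a = 0.0, 0
                  for a in range(s + 1):                # task a of the s surviving sensors
                      exp_next = sum(comb(a, k) * p_loss ** k * (1 - p_loss) ** (a - k)
                                     * V[t + 1][s - k] for k in range(a + 1))
                      val = reward(a) + gamma * exp_next
                      if val > best_val:
                          best_val, best_a = val, a
                  V[t][s], policy[t][s] = best_val, best_a
          return policy

      print(solve()[0])   # sensors to task to the first target, indexed by surviving-fleet size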