• Title/Abstract/Keyword: Data throughput

Search results: 1,386 items; processing time: 0.027 sec

Performance Analysis on DCF Considering the Number of Consecutive Successful Transmission in Wireless LAN

  • 임석구
    • 한국산학기술학회논문지 / Vol. 9, No. 2 / pp. 388-394 / 2008
  • This paper proposes an algorithm that improves the performance of DCF (Distributed Coordination Function), the MAC (Medium Access Control) of IEEE 802.11 WLAN (Wireless LAN), and analyzes it by simulation. The IEEE 802.11 WLAN MAC uses DCF and PCF (Point Coordination Function) to control data transmission, and DCF is based on CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance). DCF performs relatively well when few stations contend, but its throughput and delay degrade as the number of contending stations grows. This paper proposes an algorithm that lowers the packet collision probability by increasing the contention window to the maximum CW when a collision occurs after a packet transmission, and gradually decreasing the window after a successful transmission, so that the current state of the WLAN is continuously exploited. Simulations were performed to demonstrate the validity and efficiency of the proposed algorithm.
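The window-adaptation rule described in the abstract can be sketched as follows; the halving decay factor and the CW bounds are illustrative assumptions, not values taken from the paper:

```python
import random

CW_MIN, CW_MAX = 16, 1024  # assumed contention-window bounds

def update_cw(cw, success):
    """Adaptive contention-window update as described in the abstract:
    jump to CW_MAX after a collision, decrease gradually after a
    success (here by halving, a hypothetical decay factor) so that
    recent network-load information is retained."""
    if success:
        return max(CW_MIN, cw // 2)  # gradual decrease after success
    return CW_MAX                    # jump to maximum after collision

def backoff_slots(cw):
    """Pick a uniform random backoff count in [0, cw-1], as in DCF."""
    return random.randrange(cw)
```

The gradual decrease, rather than an immediate reset to CW_MIN, is what lets the station keep using the network-congestion information learned from earlier collisions.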

Improving the Performance of AODV(-PGB) based on Position-based Routing Repair Algorithm in VANET

  • Jung, Sung-Dae;Lee, Sang-Sun;Oh, Hyun-Seo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 4, No. 6 / pp. 1063-1079 / 2010
  • Vehicular ad hoc networks (VANETs) are one of the most important technologies for providing various ITS services. While VANET requires rapid and reliable transmission, packet transmission in VANET is unstable because of high mobility. Many routing protocols have been proposed and assessed to improve the efficiency of VANET. However, topology-based routing protocols generate heavy overhead and long delay, and position-based routing protocols suffer frequent packet loss due to inaccurate node positions. In this paper, we propose a position-based routing repair algorithm to improve the efficiency of VANET. The algorithm is based on the premise that AODV(-PGB) can be used effectively in VANET if the discovery, maintenance, and repair mechanisms of AODV are optimized for the features of VANET. Its main focus is that a relay node can determine whether an alternative node exists and judge whether the routing path is disconnected. If the relay node is about to swerve from the routing path in a multi-hop network, the node recognizes the possibility of path loss based on a defined critical domain. The node then transmits a handover packet to the next-hop node, the alternative nodes, and the previous node. The next node repairs the alternative path before path loss occurs, maintaining connectivity and providing seamless service. We simulated the protocols using both an ideal traffic model and a realistic traffic model to assess the proposed algorithm. The results show that protocols that include the proposed algorithm have fewer path losses, lower overhead, shorter delay, and higher data throughput than other protocols in VANET.

Bandwidth Management of WiMAX Systems and Performance Modeling

  • Li, Yue;He, Jian-Hua;Xing, Weixi
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 2, No. 2 / pp. 63-81 / 2008
  • WiMAX has been introduced as a competitive alternative for metropolitan broadband wireless access technologies. It is connection-oriented and can provide very high data rates, large service coverage, and flexible quality of service (QoS). Due to the large number of connections and the flexible QoS supported by WiMAX, uplink access in WiMAX networks is very challenging, since the medium access control (MAC) protocol must efficiently manage the bandwidth and related channel allocations. In this paper, we propose and investigate a cost-effective WiMAX bandwidth management scheme, named the WiMAX partial sharing scheme (WPSS), in order to provide good QoS while achieving better bandwidth utilization and network throughput. The proposed bandwidth management scheme is compared with a simple but inefficient scheme, named the WiMAX complete sharing scheme (WCPS). A maximum entropy (ME) based analytical model (MEAM) is proposed for the performance evaluation of the two bandwidth management schemes. The reason for using MEAM is that it can efficiently model a large-scale system in which the number of stations or connections is generally very high, while traditional simulation and analytical approaches (e.g., Markov models) cannot perform well due to their high computational complexity. We model the bandwidth management scheme as a queuing network model (QNM) that consists of interacting multiclass queues for the different service classes. Closed-form expressions for the state and blocking probability distributions are derived for these schemes. Simulation results verify the MEAM numerical results and show that WPSS can significantly improve the network's performance compared to WCPS.

Analysis of Energy-Efficiency in Ultra-Dense Networks: Determining FAP-to-UE Ratio via Stochastic Geometry

  • Zhang, HongTao;Yang, ZiHua;Ye, Yunfan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 11 / pp. 5400-5418 / 2016
  • Femtocells are envisioned as a key solution to embrace ever-increasing data rates and are thus extensively deployed. However, dense and random deployments of femtocell access points (FAPs) induce severe intercell interference, which in turn may degrade spectral efficiency. Hence, unrestrained proliferation of FAPs may not yield a net throughput gain. Moreover, given that the numerous FAPs deployed in ultra-dense networks (UDNs) lead to significant energy consumption, the number of FAPs deployed deserves more consideration. Nevertheless, few existing works present an analytical result for the optimal FAP density at a given User Equipment (UE) density. This paper explores the realistic scenario of randomly distributed FAPs in a UDN and derives the coverage probability via stochastic geometry. From the analytical results, the coverage probability is strictly increasing in the FAP-to-UE ratio, yet its growth rate decreases as the ratio grows. Therefore, we can identify a specific FAP-to-UE ratio as the point beyond which further increasing the ratio is not cost-effective with regard to the requirements of the communication system. To reach the optimal FAP density, we can deploy FAPs according to peak traffic and randomly switch off FAPs to keep the optimal ratio during off-peak hours. Furthermore, considering the unbalanced nature of traffic demands in the temporal and spatial domains, dynamically and carefully choosing the locations of active FAPs provides advantages over randomization. With the huge FAP density in a UDN, there are more potential choices for the locations of active FAPs, which adds to the need for a strategic sleeping policy.

User Bandwidth Demand Centric Soft-Association Control in Wi-Fi Networks

  • Sun, Guolin;Adolphe, Sebakara Samuel Rene;Zhang, Hangming;Liu, Guisong;Jiang, Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 2 / pp. 709-730 / 2017
  • To address the challenge of unprecedented growth in mobile data traffic, ultra-dense network deployment is a cost-efficient solution for offloading traffic onto small cells. The overlapping coverage areas of small cells create more than one candidate access point for a mobile user. Signal-strength-based user association in IEEE 802.11 results in a significantly unbalanced load distribution among access points. Moreover, the effective bandwidth demand of each user varies greatly owing to different preferences for mobile applications. In this paper, we formulate a set of non-linear integer programming models for the joint user-association control and user-demand guarantee problem. In this model, we aim to maximize system capacity and guarantee the effective bandwidth demand of each user through soft-association control with a software-defined network controller. Given the NP-hard complexity of non-linear integer programming, we propose a Kernighan-Lin-based graph-partitioning method for large-scale networks. Finally, we evaluate the performance of the proposed algorithm for edge users with heterogeneous bandwidth demands and under mobility scenarios. Simulation results show that the proposed adaptive soft-association control achieves better performance than the other two schemes and improves the individual quality of user experience at a small cost in system throughput.
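A minimal sketch of the kind of Kernighan-Lin-style refinement the abstract mentions, on a toy undirected graph; the adjacency layout and the single greedy swap pass are illustrative assumptions, not the authors' implementation:

```python
def cut_size(adj, part_a, part_b):
    """Number of edges crossing between the two partitions
    (adj maps each vertex to its neighbor set, symmetrically)."""
    return sum(1 for u in part_a for v in adj.get(u, ()) if v in part_b)

def kl_refine(adj, part_a, part_b):
    """One Kernighan-Lin-style pass: try every cross-partition vertex
    swap and commit the one that most reduces the cut, if any."""
    base = cut_size(adj, part_a, part_b)
    best = None
    for u in part_a:
        for v in part_b:
            a2 = (part_a - {u}) | {v}
            b2 = (part_b - {v}) | {u}
            c = cut_size(adj, a2, b2)
            if best is None or c < best[0]:
                best = (c, a2, b2)
    if best and best[0] < base:
        return best[1], best[2]
    return part_a, part_b
```

In the paper's setting the vertices would be users/access points and the objective a capacity term rather than a raw edge cut; the sketch only shows the partition-refinement mechanic.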

Design of Encryption/Decryption IP for Lightweight Encryption LEA

  • 손승일
    • 인터넷정보학회논문지 / Vol. 18, No. 5 / pp. 1-8 / 2017
  • LEA (Lightweight Encryption Algorithm) was developed in 2013 by the National Security Research Institute (NSRI) to suit big-data processing, cloud services, and mobile environments. LEA specifies encryption of a 128-bit message block with 128-bit, 192-bit, and 256-bit keys. In this paper, an LEA block-cipher IP that can encrypt and decrypt 128-bit messages was designed in Verilog-HDL. The designed LEA encryption/decryption IP operates at about 164 MHz on a Xilinx Virtex-5 device. The maximum throughput is 874 Mbps in 128-bit key mode, 749 Mbps in 192-bit key mode, and 656 Mbps in 256-bit key mode. The cipher-processor IP designed in this paper is expected to be applicable as a security module in mobile areas such as smart cards, Internet banking, e-commerce, and the IoT (Internet of Things).
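The three reported throughput figures are consistent with a one-round-per-cycle datapath (an inference, not stated in the abstract), since LEA uses 24, 28, and 32 rounds for 128-, 192-, and 256-bit keys. A quick arithmetic check:

```python
def throughput_mbps(freq_mhz, block_bits, rounds):
    """Peak throughput of a block-cipher core that finishes one round
    per clock cycle: one block every `rounds` cycles."""
    return freq_mhz * block_bits / rounds

# LEA round counts per key size, from the LEA specification
ROUNDS = {128: 24, 192: 28, 256: 32}

# Truncated to whole Mbps, matching the reported 874 / 749 / 656
results = {k: int(throughput_mbps(164, 128, r)) for k, r in ROUNDS.items()}
```

The match across all three key modes is what suggests the single-round iterative architecture.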

Long-Term Performance Evaluation of Scheduling Disciplines in OFDMA Multi-Rate Video Multicast Transmission

  • 홍진표;한민규
    • 정보과학회 논문지 / Vol. 43, No. 2 / pp. 246-255 / 2016
  • OFDMA is well suited to multi-rate multicast transmission because it allows flexible resource allocation along the frequency and time axes and supports adaptive modulation and coding. Unlike layered coding, MDC (multiple description coding) makes it easy to decompose a video stream into sub-streams and reassemble them, and video quality increases in proportion to the reception rate. For multi-rate video multicast over OFDMA wireless or mobile networks, we present a mathematical model of resource allocation and transmission rate, and compare, from a long-term perspective via mean-value analysis, scheduling disciplines that maximize the user-perceived video quality index MOS (mean opinion score) or maximize proportional fairness. We also present a pruning algorithm that guarantees a minimum quality to some users within limited resources, and show that sub-streams can be optimally partitioned over the whole video session or part of it.

A High-Speed Hardware Design of IDEA Cipher Algorithm by Applying Fermat's Theorem

  • 최영민;권용진
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터 / Vol. 7, No. 6 / pp. 696-702 / 2001
  • This paper proposes a method to improve the processing speed of IDEA, which is known to be cryptographically stronger than DES, by applying Fermat's little theorem to the multiplicative-inverse operation modulo 2^16+1, the most computation-intensive operation in the algorithm. The proposed inverse computation modulo 2^16+1 based on Fermat's little theorem reduces the number of required operations by about 50% compared with the conventional extended Euclidean algorithm. An IDEA hardware design with a single-round iterative structure using the proposed inverse method achieves a maximum operating frequency of 20 MHz, a gate count of 118,774 gates, and a throughput of 116 Mbits/sec. This is about twice as fast as the previous work by H. Bonnenberg with the same single-round iterative structure, showing that the proposed multiplicative-inverse computation modulo 2^16+1 is efficient in terms of speed.
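The inverse computation the abstract describes rests on Fermat's little theorem: since p = 2^16 + 1 = 65537 is prime, a^(p-2) ≡ a^(-1) (mod p) for any nonzero a. A minimal software sketch of the arithmetic (the paper's hardware datapath is not reproduced here):

```python
P = 2**16 + 1  # 65537, the prime modulus used by IDEA's multiplication

def inverse_fermat(a):
    """Multiplicative inverse mod 2^16+1 via Fermat's little theorem:
    a^(p-2) mod p. Because p-2 = 0xFFFF has a fixed 16-bit pattern,
    the square-and-multiply chain has fixed length, which is
    convenient for a hardware implementation."""
    assert 0 < a < P, "inverse defined only for nonzero residues"
    return pow(a, P - 2, P)
```

(In IDEA itself the 16-bit value 0 encodes the residue 2^16; that encoding detail is omitted from this sketch.)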


Efficient Post-Quantum Secure Network Coding Signatures in the Standard Model

  • Xie, Dong;Peng, HaiPeng;Li, Lixiang;Yang, Yixian
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 5 / pp. 2427-2445 / 2016
  • In contrast to traditional "store-and-forward" routing mechanisms, network coding offers an elegant solution for achieving maximum network throughput. The core idea is that intermediate network nodes linearly combine received data packets so that the destination nodes can decode the original files from some authenticated packets. Although network coding has many advantages, especially in wireless sensor networks and peer-to-peer networks, the encoding performed at intermediate nodes also introduces additional security issues. Against a powerful adversary who can control an arbitrary number of malicious network nodes and eavesdrop on the entire network, cryptographic signature schemes provide undeniable authentication for network nodes. However, with the development of quantum technologies, existing network coding signature schemes based on traditional number-theoretic primitives are vulnerable to quantum cryptanalysis. In this paper we first present an efficient network coding signature scheme in the standard model using lattice theory, which can be viewed as the most promising tool for designing post-quantum cryptographic protocols. In the security proof, we propose a new method for generating a random lattice and the corresponding trapdoor, which may be used in other cryptographic protocols. Our scheme has many advantages, such as support for multi-source networks, low computational complexity, and low communication overhead.
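The linear-combination idea can be illustrated over GF(2), where combining packets is a bitwise XOR; this is a deliberate simplification for illustration, as practical network coding usually works over larger fields such as GF(2^8):

```python
def combine(packets, coeffs):
    """Intermediate-node step of linear network coding over GF(2):
    XOR together the equal-length packets whose coefficient is 1."""
    out = bytes(len(packets[0]))
    for pkt, c in zip(packets, coeffs):
        if c:  # coefficient 1 in GF(2) means "include this packet"
            out = bytes(x ^ y for x, y in zip(out, pkt))
    return out
```

A destination that collects enough combined packets with linearly independent coefficient vectors recovers the originals by Gaussian elimination over the same field; the signature schemes discussed in the abstract exist to authenticate these combinations against packet-pollution attacks.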

Validation of One-Step Real-Time RT-PCR Assay in Combination with Automated RNA Extraction for Rapid Detection and Quantitation of Hepatitis C Virus RNA for Routine Testing in Clinical Specimens

  • KIM BYOUNG-GUK;JEONG HYE-SUNG;BAEK SUN-YOUNG;SHIN JIN-HO;KIM JAE-OK;MIN KYUNG-IL;RYU SEUNG-REL;MIN BOK-SOON;KIM DO-KEUN;JEONG YONG-SEOK;PARK SUE-NIE
    • Journal of Microbiology and Biotechnology / Vol. 15, No. 3 / pp. 595-602 / 2005
  • A one-step real-time quantitative RT-PCR assay in combination with automated RNA extraction was evaluated for routine testing of HCV RNA in the laboratory. Specific primers and probes were developed to detect a 302 bp region in the 5'-UTR of HCV RNA. The assay was able to quantitate a dynamic linear range of 10^7 to 10^1 HCV RNA copies/reaction (R^2 = 0.997). The synthetic HCV RNA standard of 1.84 ± 0.1 (mean ± SD) copies developed in this study corresponded to 1 international unit (IU) of the WHO International Standard for HCV RNA (96/790). The detection limit of the assay was 3 RNA copies/reaction (81 IU/ml) in plasma samples. The assay was comparable to the Amplicor HCV Monitor (Monitor) assay with a correlation coefficient of r = 0.985, but was more sensitive than the Monitor assay. The assay could be completed within 3 h from RNA extraction to detection and data analysis for up to 32 samples. It allowed rapid RNA extraction, detection, and quantitation of HCV RNA in plasma samples. The method provided sufficient sensitivity and reproducibility and proved to be fast and labor-saving, making it suitable for high-throughput HCV RNA testing.