• Title/Summary/Keyword: Throughput Evaluation


Performance Evaluation of VHF Digital Link Mode 3 System (VHF Digital Link 모드 3 시스템의 성능 평가)

  • Bae, Joong-Won;Nam, Gi-Wook;Kwak, Jae-Min;Park, Ki-Sik;Cho, Sung-Eon
    • Journal of Advanced Navigation Technology
    • /
    • v.9 no.2
    • /
    • pp.156-163
    • /
    • 2005
  • In this paper, we analyze the performance of a VDL Mode 3 system model whose specification is defined by the ICAO (International Civil Aviation Organization). For the performance evaluation, we obtained the BER (Bit Error Rate), transmission delay time, burst retransmission rate, and throughput. From the analysis results, we could explicitly define the relationships among BER, transmission delay time, throughput, and burst retransmission rate. In addition, we found that the V/D retransmission rate and throughput are closely related in the downlink channel.
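The qualitative relationship among BER, burst retransmission rate, and throughput described in the abstract can be sketched with a toy model (a hypothetical illustration under an independent-bit-error assumption, not the paper's ICAO-specified VDL Mode 3 model; the burst size and channel rate are arbitrary parameters): a burst must be retransmitted whenever any of its bits is corrupted.

```python
def burst_retx_rate(ber, burst_bits):
    """Probability that a burst of burst_bits needs retransmission,
    assuming independent bit errors at the given BER."""
    return 1.0 - (1.0 - ber) ** burst_bits

def effective_throughput(rate_bps, ber, burst_bits):
    """Goodput after retransmissions: on average each burst is sent
    1/(1 - p) times, so the effective rate shrinks by (1 - p)."""
    p = burst_retx_rate(ber, burst_bits)
    return rate_bps * (1.0 - p)
```

As the BER grows, the burst retransmission rate rises and the effective throughput falls, mirroring the monotone relationship the paper quantifies for the VDL Mode 3 link.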


Implementation of Z-Factor Statistics for Performance Evaluation of Quality Innovation in the High Throughput Process (High Throughput 프로세스에서 품질혁신의 성능평가를 위한 Z-Factor의 적용방안)

  • Choi, Sung-Woon
    • Journal of the Korea Safety Management & Science
    • /
    • v.15 no.1
    • /
    • pp.293-301
    • /
    • 2013
  • The purpose of this study is to present the limitations of the previously used six sigma process evaluation metrics, $Z_{st}$ and $P_{pk}$, and to overcome these drawbacks with a metric based on the Z-factor used for evaluating quality innovation performance. A case analysis of projects from the national six sigma contests of 2011 and 2012 is performed, and a literature review of HTS (High Throughput Screening) in new drug development is used to propose an innovative performance evaluation metric. The study shows that the six sigma evaluation metrics $Z_{st}$ and $P_{pk}$ exhibit no significant difference across industry types (manufacturing, semi-public institute, public institute) or CTQ types (product technology type CTQ, process technology type CTQ); this finding characterizes such quality improvement as fixed-target projects. When the newly developed moving-target quality innovation performance metric, the Z-factor, is applied instead, hypothesis testing suggests that $Z_{st}$ and $P_{pk}$ behave differently, or even show a reciprocal relationship. The constraints of the study are the relatively small sample of only 37 projects from the past two years and the difficulty of arranging interviews with six sigma practitioners for the qualitative part of the study. Applying the proposed quality innovation performance metric to both moving-target six sigma innovation projects and fixed-target improvement projects or quality circles enables better understanding and practical use by quality practitioners. The downside of the fixed-target evaluation metrics $Z_{st}$ and $P_{pk}$ is demonstrated through the case study. In contrast, the advantage of this study is that innovation effects requiring high throughput in product technology, process technology, and quantum-leap improvement are evaluated in terms of precision and accuracy, and a Z-factor that enables relative comparison between enterprises is proposed and implemented.
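The Z-factor the study borrows from HTS is the widely used screening-window coefficient: one minus three combined standard deviations over the dynamic range between positive and negative controls. A minimal sketch of that standard formula (the example means and standard deviations are invented for illustration):

```python
def z_factor(mu_pos, sd_pos, mu_neg, sd_neg):
    """Screening-window coefficient from high-throughput screening.
    Values near 1 indicate excellent separation between the positive
    and negative distributions; values <= 0 mean they overlap."""
    return 1.0 - 3.0 * (sd_pos + sd_neg) / abs(mu_pos - mu_neg)
```

Because it combines accuracy (the separation of means) with precision (the spreads), the metric supports the relative, moving-target comparisons between projects that the study argues $Z_{st}$ and $P_{pk}$ cannot provide.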

Dynamic Prime Chunking Algorithm for Data Deduplication in Cloud Storage

  • Ellappan, Manogar;Abirami, S
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.4
    • /
    • pp.1342-1359
    • /
    • 2021
  • The data deduplication technique identifies duplicates and minimizes redundant storage data on the backup server. Chunk-level deduplication plays a significant role in detecting appropriate chunk boundaries, which addresses challenges such as low throughput and high chunk-size variance in the data stream. As a solution, we propose a new chunking algorithm called Dynamic Prime Chunking (DPC). The main goal of DPC is to dynamically change the window size, within a prime value, based on the minimum and maximum chunk sizes. According to the results, DPC provides high throughput and avoids significant chunk variance in the deduplication system. The implementation and experimental evaluation were performed on multimedia and operating system datasets, and DPC was compared with existing algorithms such as Rabin, TTTD, MAXP, and AE. Chunk count, chunking time, throughput, processing time, Bytes Saved Per Second (BSPS), and Deduplication Elimination Ratio (DER) are the performance metrics analyzed in our work. The analysis shows that throughput and BSPS improve: DPC improves throughput by more than 21% over AE, and BSPS increases by up to 11% over the existing AE algorithm. For these reasons, our algorithm minimizes the total processing time and achieves higher deduplication efficiency than existing Content Defined Chunking (CDC) algorithms.
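As a rough illustration of the content-defined chunking family DPC belongs to (a simplified sketch, not the authors' DPC algorithm; the cumulative polynomial hash, prime, mask, and size limits below are arbitrary stand-ins), a boundary is cut where a hash condition fires, but never before a minimum chunk size and always by a maximum:

```python
def chunk_boundaries(data, min_size=2048, max_size=8192, prime=31, mask=0x0FFF):
    """Return end offsets of chunks over data. A boundary is declared
    where the hash matches the mask, subject to min_size/max_size."""
    boundaries, start, h = [], 0, 0
    for i, b in enumerate(data):
        # cumulative polynomial hash (a stand-in for a true rolling hash)
        h = (h * prime + b) & 0xFFFFFFFF
        size = i - start + 1
        if size < min_size:          # too small: keep scanning
            continue
        if (h & mask) == 0 or size >= max_size:
            boundaries.append(i + 1)
            start, h = i + 1, 0      # reset for the next chunk
    if start < len(data):            # trailing partial chunk
        boundaries.append(len(data))
    return boundaries
```

Because boundaries depend on content rather than fixed offsets, an insertion early in the stream shifts only nearby chunks; the min/max clamp is what tames the chunk-size variance the paper targets.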

Contiki-NG-based IEEE 802.15.4 TSCH Throughput Evaluation (Contiki-NG 기반 IEEE 802.15.4 TSCH 처리량 분석)

  • Lee, Sol-Bee;Kim, Eui-Jik;Lim, Yongseok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.577-578
    • /
    • 2018
  • In this paper, we evaluate the throughput performance of an IEEE 802.15.4 Time Slotted Channel Hopping (TSCH) tree network using the Contiki-NG operating system. We build a virtual simulation environment to compare the throughput of various IEEE 802.15.4 TSCH networks as the number of nodes and the hop count change. The simulation results show that throughput increases as the number of nodes increases, while it decreases as the hop count increases.
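The reported trend (throughput rising with node count, falling with hop count) can be mimicked with a back-of-the-envelope TSCH capacity model; the slotframe length, slot duration, payload size, and per-node cell allocation below are illustrative assumptions, not verified Contiki-NG defaults.

```python
def tsch_goodput_bps(num_nodes, hop_count, cells_per_node=1,
                     slotframe_slots=101, slot_ms=10, payload_bytes=100):
    """Toy model: each source owns dedicated cells per slotframe, and a
    packet must be forwarded once per hop, so per-source goodput scales
    as 1/hop_count while aggregate goodput scales with num_nodes."""
    slotframe_s = slotframe_slots * slot_ms / 1000.0
    per_source = cells_per_node * payload_bytes * 8 / slotframe_s / hop_count
    return num_nodes * per_source
```

Even this crude model reproduces the simulation's qualitative result: adding sources adds scheduled cells, while each extra hop spends cells on relaying instead of fresh data.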


Development of Analysis Model and Improvement of Evaluation Method of LOS for Freeway Merging Areas (고속도로 합류부 분석모형 개발 및 서비스수준 평가 기법 개선 연구)

  • Lee, Seung-Jun;Park, Jae-Beom
    • Journal of Korean Society of Transportation
    • /
    • v.24 no.7 s.93
    • /
    • pp.115-128
    • /
    • 2006
  • The analysis methodology for merging areas in the KHCM (2004) assumes that congestion may occur when traffic demand exceeds capacity. In many cases, however, congestion in a merging area occurs when the sum of the mainline and ramp traffic demand is less than capacity, and with the present methodology it is difficult to analyze how the mainline and ramp flows affect congestion occurrence. In this study, a model is developed that estimates the traffic flow condition in a merging area according to the combination of mainline and ramp demand flows. The main feature of the model is the estimation of the maximum possible throughput rate and the maximum throughput rate for each combination of mainline and ramp demand flows. Through these estimates, it was possible to predict whether congestion would occur and how large the maximum throughput rate and the congestion in the merging area would be. Furthermore, in the present LOS evaluation methodology for merging areas, the traffic state is judged to be uncongested whenever the demand flow is less than capacity. Therefore, to establish a more reasonable LOS evaluation method, a new criterion for evaluating LOS in merging areas was derived based on the model developed in this study.
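The paper's premise, that a merge can break down below nominal capacity, can be illustrated with a hypothetical functional form (the friction coefficient and linear shape below are invented for illustration and are not the paper's estimated model): achievable throughput falls as the ramp's share of total demand grows.

```python
def max_throughput_vph(mainline_vph, ramp_vph, capacity_vph=4000, friction=0.3):
    """Hypothetical: merging friction lowers achievable throughput
    below nominal capacity as the ramp's share of demand grows."""
    total = mainline_vph + ramp_vph
    ramp_share = ramp_vph / total if total else 0.0
    return capacity_vph * (1.0 - friction * ramp_share)

def is_congested(mainline_vph, ramp_vph, **kw):
    """Congestion is predicted when demand exceeds the achievable
    (not the nominal) throughput, so it can occur below capacity."""
    return mainline_vph + ramp_vph > max_throughput_vph(mainline_vph, ramp_vph, **kw)
```

Under this sketch a demand of 3,900 vph with a heavy ramp share is congested even though it is below the 4,000 vph nominal capacity, which is exactly the case the KHCM-style capacity check misses.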

Dynamic Resource Allocation of Random Access for MTC Devices

  • Lee, Sung-Hyung;Jung, So-Yi;Kim, Jae-Hyun
    • ETRI Journal
    • /
    • v.39 no.4
    • /
    • pp.546-557
    • /
    • 2017
  • In a Long Term Evolution-Advanced (LTE-A) system, the traffic overload of machine type communication devices is a challenge because too many devices attempt to access a base station (BS) simultaneously within a short period of time. We discuss the gap between the theoretical maximum throughput and the actual throughput, which occurs because the BS cannot change the number of preambles for the random access channel (RACH) until multiple RACHs have completed. In addition, a preamble partition approach is proposed in this paper that uses two groups of preambles to reduce this gap. A performance evaluation shows that the proposed approach increases the average throughput. For 100,000 devices in a cell, the throughput is increased by 29.7% to 114.4% and by 23.0% to 91.3% for uniform and Beta-distributed device arrivals, respectively.
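The collision behavior behind the preamble-partition idea can be sketched with the classic multichannel slotted-ALOHA approximation (an illustrative model, not the paper's analysis; the 54-preamble pool and 50/50 split are assumed parameters): a preamble succeeds when exactly one device picks it.

```python
def expected_successes(devices, preambles):
    """Expected number of devices whose randomly chosen preamble is
    picked by no one else (multichannel slotted-ALOHA approximation)."""
    if devices <= 0 or preambles <= 0:
        return 0.0
    return devices * (1.0 - 1.0 / preambles) ** (devices - 1)

def partitioned_successes(devices, preambles, split=0.5):
    """Split the devices and the preamble pool into two independent
    groups and sum the expected successes of each group."""
    d1, m1 = round(devices * split), round(preambles * split)
    return (expected_successes(d1, m1)
            + expected_successes(devices - d1, preambles - m1))
```

The model makes the overload regime visible: once the device count far exceeds the preamble pool, expected successes collapse toward zero, which is the gap the dynamic allocation in the paper is designed to close.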

Performance Evaluation on Throughput of a Petri Net Modeled Food Business

  • Naoki Nakayama;Shingo Yamaguchi;Ge, Qi-Wei;Minoru Tanaka
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.661-664
    • /
    • 2000
  • A workflow expresses the flow of persons and things related to a business. To improve the efficiency of a business, it is important to grasp and evaluate the actual situation of the current business. Until now, research on workflows has mostly targeted office business, and those results cannot simply be applied to food business. It is also important to evaluate a food business workflow against a specific standard. In this paper, we propose a modeling method for food business using hierarchical Petri nets. We then propose a concept, called throughput, as a standard for evaluating these workflows. Finally, we show a method for computing throughput and apply a Petri net tool, Design/CPN, to simulate the throughput computation. Our simulation results show that the modeling method and the throughput computation method are reasonable and useful.
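As a minimal illustration of throughput as a workflow evaluation standard (a generic bottleneck sketch, not the paper's hierarchical Petri-net model or its Design/CPN simulation; stage times and staffing are hypothetical), a serial workflow's steady-state rate is limited by its slowest stage:

```python
def workflow_throughput(stage_times, workers_per_stage=None):
    """Steady-state throughput (jobs per time unit) of a serial
    workflow: each stage processes at workers/time_per_job, and the
    pipeline runs at the rate of its slowest stage."""
    if workers_per_stage is None:
        workers_per_stage = [1] * len(stage_times)
    rates = [w / t for t, w in zip(stage_times, workers_per_stage)]
    return min(rates)
```

Adding staff anywhere except the bottleneck stage leaves throughput unchanged, which is the kind of insight a throughput standard lets a business read off its workflow model.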


A Throughput Computation Method for Throughput Driven Floorplan (처리량 기반 평면계획을 위한 처리량 계산 방법)

  • Kang, Min-Sung;Rim, Chong-Suck
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.12
    • /
    • pp.18-24
    • /
    • 2007
  • As VLSI technology scales to the nanometer order, relatively increasing global wire delay has added complexity to system design. Global wire delay can be reduced by inserting pipeline elements onto wires, but this must be coupled with an LIP (Latency-Insensitive Protocol) to preserve correct system timing. This combination, however, drops the throughput even though it ensures system functionality. In this paper, we propose a computation method useful for minimizing throughput deterioration when pipeline elements are inserted to reduce global wire delay. We apply this method while placing blocks in the floorplanning stage. When the result of this computation is reflected in the floorplanning cost function, throughput increases by 16.97% on average compared with floorplanning that uses the conventional heuristic throughput evaluation method.
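The throughput drop the paper minimizes follows the standard latency-insensitive design result: a feedback loop with N stages that receives k inserted relay stations sustains at most N/(N+k) of the nominal rate, and the system runs at its slowest loop. A sketch under that assumption (the loop sizes below are illustrative):

```python
def loop_throughput(stages, relay_stations):
    """Sustained throughput of one feedback loop after relay-station
    insertion: stages / (stages + relay_stations)."""
    return stages / (stages + relay_stations)

def system_throughput(loops):
    """A latency-insensitive system is throttled by its slowest
    feedback loop; loops is a list of (stages, relay_stations)."""
    return min(loop_throughput(s, r) for s, r in loops)
```

This is why a throughput-driven floorplanner prefers placements that keep blocks on tight feedback loops close together: every relay station forced into a loop by long wires lowers the whole system's rate.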

Throughput and Delay Analysis of a Reliable Cooperative MAC Protocol in Ad Hoc Networks

  • Jang, Jaeshin;Kim, Sang Wu;Wie, Sunghong
    • Journal of Communications and Networks
    • /
    • v.14 no.5
    • /
    • pp.524-532
    • /
    • 2012
  • In this paper, we present the performance evaluation of the reliable cooperative media access control (RCO-MAC) protocol, which we proposed in [1] to enhance system throughput in bad wireless channel environments. The performance of the protocol is evaluated here by computer simulation as well as mathematical analysis. The system throughput and two types of average delay, the average channel access delay and the average system delay (which includes the queuing delay in the buffer), are used as performance metrics. In addition, two different traffic models are used for the evaluation: the saturated traffic model for computing system throughput and average channel access delay, and an exponential data generation model for calculating average system delay. The numerical results show that the proposed RCO-MAC protocol provides over 20% more system throughput than the relay distributed coordination function (rDCF) scheme. They also show that RCO-MAC incurs slightly higher average channel access delay as the number of source nodes grows, because more source nodes create more opportunities for cooperative request-to-send (CRTS) frame collisions and because the related retransmission timer is larger in RCO-MAC than in rDCF. Finally, the numerical results confirm that RCO-MAC provides better average system delay than rDCF over the whole range of source-node counts.