• Title/Summary/Keyword: system throughput


Thread Block Scheduling for GPGPU based on Fine-Grained Resource Utilization (상세 자원 이용률에 기반한 병렬 가속기용 스레드 블록 스케줄링)

  • Bahn, Hyokyung;Cho, Kyungwoon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.49-54 / 2022
  • With the recent widespread adoption of general-purpose GPUs (GPGPUs) in cloud systems, maximizing resource utilization through multitasking on GPGPUs has become an important issue. In this article, we show that resource allocation based on a coarse classification of workloads into compute-bound and memory-bound is not sufficient for high resource utilization, and we present a new thread block scheduling policy for GPGPUs that makes use of the fine-grained resource utilization of each workload. Unlike previous approaches, the proposed policy reduces scheduling overhead by separating profiling from scheduling, and maximizes resource utilization by co-locating workloads with different bottleneck resources. Through simulations under various virtual machine scenarios, we show that the proposed policy improves GPGPU throughput by 130.6% on average and up to 161.4%.
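
A rough illustration of the co-location idea above (ours, not the authors' implementation; the resource names and profile numbers are invented): given profiled per-resource utilizations, greedily pair workloads whose bottleneck resources complement each other.

```python
# Hypothetical bottleneck-aware co-scheduler (illustrative only).
from itertools import combinations

RESOURCES = ["alu", "memory_bw", "shared_mem", "special_fn"]

def fits(a, b, cap=1.0):
    """True if two workloads can share the GPU without oversubscribing any resource."""
    return all(a[r] + b[r] <= cap for r in RESOURCES)

def combined_utilization(a, b):
    """Average utilization across resources when a and b are co-located."""
    return sum(min(a[r] + b[r], 1.0) for r in RESOURCES) / len(RESOURCES)

def co_schedule(profiles):
    """Greedily pick compatible pairs that maximize combined utilization."""
    pending, pairs = set(profiles), []
    while len(pending) > 1:
        best = max(
            (p for p in combinations(pending, 2) if fits(profiles[p[0]], profiles[p[1]])),
            key=lambda p: combined_utilization(profiles[p[0]], profiles[p[1]]),
            default=None,
        )
        if best is None:
            break
        pairs.append(best)
        pending -= set(best)
    return pairs, pending  # co-located pairs, plus leftovers that run alone

# A compute-bound and a memory-bound workload pair up well; the balanced one runs alone.
profiles = {
    "matmul":  {"alu": 0.80, "memory_bw": 0.20, "shared_mem": 0.30, "special_fn": 0.10},
    "stencil": {"alu": 0.15, "memory_bw": 0.80, "shared_mem": 0.40, "special_fn": 0.00},
    "fft":     {"alu": 0.50, "memory_bw": 0.50, "shared_mem": 0.60, "special_fn": 0.30},
}
print(co_schedule(profiles))
```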

Server State-Based Weighted Load Balancing Techniques in SDN Environments (SDN 환경에서 서버 상태 기반 가중치 부하분산 기법)

Lee, Kyoung-Han;Kwon, Tea-Wook
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.6 / pp.1039-1046 / 2022
  • Following the COVID-19 pandemic, the spread of contactless ("untact") culture and the Fourth Industrial Revolution generated far more data, of far more kinds, than ever before. The resulting growth in data throughput gradually exposed the limitations of existing vendor- and hardware-centric network systems. Recently, user- and software-centric SDN technology, which can overcome these limitations, has been attracting attention. SDN-based load balancing techniques are also expected to increase efficiency in the load balancing layer of server clusters in data centers, which generate and process vast and diverse data. Unlike existing SDN load-balancing studies, this paper proposes a technique in which the controller checks server state on the occurrence of events, rather than through periodic monitoring, and assigns each user request with a weight derived from the server's load ratio. In our experiments, the proposed technique distributed load more evenly than the comparison technique, and is thus expected to be more effective in server clusters of large data centers with heavy packet flows.
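
A minimal sketch of the dispatch step described above (identifiers are ours; the paper's exact weighting formula may differ): server load ratios are updated only when state-change events arrive, and requests are assigned with weights proportional to spare capacity.

```python
import random

class WeightedBalancer:
    def __init__(self, servers):
        # load ratio in [0, 1]; refreshed by events, not by periodic polling
        self.load = {s: 0.0 for s in servers}

    def on_server_event(self, server, load_ratio):
        """Event-driven state update pushed to the controller."""
        self.load[server] = load_ratio

    def pick(self):
        """Choose a server with weight = spare capacity (1 - load ratio)."""
        servers = list(self.load)
        weights = [max(1.0 - self.load[s], 0.01) for s in servers]  # floor avoids starvation
        return random.choices(servers, weights=weights, k=1)[0]

lb = WeightedBalancer(["s1", "s2", "s3"])
lb.on_server_event("s1", 0.9)   # s1 heavily loaded -> rarely chosen
lb.on_server_event("s2", 0.3)
print(lb.pick())
```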

Analysis of Efficiency and Productivity for Major Korean Seaports using PCA-DEA model (PCA-DEA 모델을 이용한 국내 주요항만의 효율성과 생산성 분석에 관한 연구)

  • Pham, Thi Quynh Mai;Kim, Hwayoung
    • Journal of Korea Port Economic Association / v.38 no.2 / pp.123-138 / 2022
  • Korea has made huge investments in its port system, annually upgrading its infrastructure to turn its ports into Asian hub ports. However, while Busan port is ranked fifth globally for container throughput, other Korean ports are ranked much lower. This article applies Data Envelopment Analysis (DEA) and the Malmquist Productivity Index (MPI) to evaluate selected major Korean seaports' operational efficiency and productivity from 2010 to 2018. It further integrates Principal Component Analysis (PCA) into DEA; the combined PCA-DEA model strengthens the basic DEA results, since DEA's discriminatory power weakens when the number of variables exceeds the number of Decision-Making Units (DMUs). Meanwhile, MPI is applied to measure the seaports' productivity over the years. The analyses generate efficiency and productivity rankings for Korean seaports. The results show that, except for Gwangyang and Ulsan, none of the selected seaports currently operates efficiently. The study also indicates that technological progress has led to impactful changes in the productivity of Korean seaports.
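
An illustrative sketch of the PCA-DEA combination (assumed toy data and dimensions, not the paper's dataset): PCA compresses correlated variables into a few components, restoring DEA's discriminatory power when variables outnumber DMUs, and an input-oriented CCR model then scores each DMU.

```python
import numpy as np
from scipy.optimize import linprog

def pca_reduce(X, k):
    """Project standardized data onto its first k principal components,
    shifted strictly positive since DEA requires non-negative data."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Z @ Vt[:k].T
    return P - P.min(axis=0) + 0.1

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o; X is (n, m) inputs, Y is (n, s) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta over [theta, lambda_1..n]
    A_in = np.c_[-X[o].reshape(m, 1), X.T]      # sum_j lam_j * x_ij <= theta * x_io
    A_out = np.c_[np.zeros((s, 1)), -Y.T]       # sum_j lam_j * y_rj >= y_ro
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun                              # theta in (0, 1]; 1.0 means efficient

rng = np.random.default_rng(0)
inputs = pca_reduce(rng.uniform(1, 10, (6, 5)), k=2)  # 6 ports, 5 raw inputs -> 2 PCs
outputs = rng.uniform(1, 10, (6, 1))                  # e.g., container throughput
print([round(ccr_efficiency(inputs, outputs, o), 3) for o in range(6)])
```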

Integrating Resilient Tier N+1 Networks with Distributed Non-Recursive Cloud Model for Cyber-Physical Applications

  • Okafor, Kennedy Chinedu;Longe, Omowunmi Mary
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.7 / pp.2257-2285 / 2022
  • Cyber-physical systems (CPS) have been growing exponentially due to improved cloud-datacenter infrastructure-as-a-service (CDIaaS). Incremental expandability (scalability), Quality of Service (QoS) performance, and reliability are currently the automation focus for healthy Tier 4 CDIaaS. However, stable QoS is yet to be fully addressed in cyber-physical data centers (CP-DCS), and balanced agility and flexibility for application workloads need urgent attention. A resilient, fault-tolerant scheme for CPS routing services, including Pod-cluster reliability analytics, is needed to meet QoS requirements. Motivated by these concerns, our contributions are fourfold. First, a Distributed Non-Recursive Cloud Model (DNRCM) is proposed to support cyber-physical workloads for remote lab activities. Second, an efficient QoS stability model based on the Routh-Hurwitz criterion is established. Third, the CDIaaS DCN topology is validated for handling large-scale traffic workloads; Network Function Virtualization (NFV) with Floodlight SDN controllers was adopted to implement DNRCM, with a rule base embedded in Open vSwitch engines. Fourth, QoS is evaluated experimentally. Considering the non-recursive queuing delays with logical SDN isolation, a lower queuing delay (19.65%) is observed; without logical isolation, the average queuing delay is 80.34%. Without logical resource isolation, fault tolerance yields 33.55%, while with logical isolation it yields 66.44%. In terms of throughput, DNRCM, recursive BCube, and DCell offered 38.30%, 36.37%, and 25.53%, respectively. Similarly, DNRCM had an improved incremental scalability profile of 40.00%, while BCube and recursive DCell had 33.33% and 26.67%, respectively. In terms of service availability, DNRCM offered 52.10%, compared with recursive BCube and DCell, which yielded 34.72% and 13.18%, respectively. The average delays obtained for DNRCM, recursive BCube, and DCell are 32.81%, 33.44%, and 33.75%, respectively. Finally, workload utilization for DNRCM, recursive BCube, and DCell yielded 50.28%, 27.93%, and 21.79%, respectively.
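
For reference, a minimal Routh-Hurwitz check of the kind the stability model relies on (our sketch; the example polynomial is made up): build the Routh array of a characteristic polynomial and require no sign change in the first column, which places all poles in the left half-plane.

```python
from fractions import Fraction

def routh_stable(coeffs):
    """coeffs: characteristic polynomial, highest degree first,
    e.g. s^3 + 2s^2 + 3s + 4 -> [1, 2, 3, 4]."""
    c = [Fraction(x) for x in coeffs]
    rows = [c[0::2], c[1::2] + [Fraction(0)] * (len(c[0::2]) - len(c[1::2]))]
    for _ in range(len(c) - 2):
        a, b = rows[-2], rows[-1]
        if b[0] == 0:
            return False  # zero pivot: marginal/unstable (epsilon method omitted)
        new = [(b[0] * a[i + 1] - a[0] * b[i + 1]) / b[0]
               for i in range(len(a) - 1)] + [Fraction(0)]
        rows.append(new)
    first_col = [r[0] for r in rows if r[0] != 0]
    # stable iff the first column has no sign change
    return all(x > 0 for x in first_col) or all(x < 0 for x in first_col)

print(routh_stable([1, 2, 3, 4]))   # s^3 + 2s^2 + 3s + 4 -> True (stable)
print(routh_stable([1, 1, -2]))     # s^2 + s - 2 has a root at s = 1 -> False
```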

Performance Analysis of Antenna Polarization Diversity on LTE 2×2 MIMO in Indoor Environment (실내 환경에서 LTE 2×2 MIMO 기술의 안테나 편파 다이버서티 성능 분석)

  • Nguyen, Duc T.;Devi, Ningombam Devarani;Shin, Seokjoo
    • The Journal of Korean Institute of Next Generation Computing / v.13 no.1 / pp.7-21 / 2017
  • The performance of the multiple-antenna techniques employed in fourth-generation mobile communication systems is affected mostly by the transmission environment and the antenna configuration. In this paper, the performance of indoor LTE (Long-Term Evolution) MIMO (multiple input multiple output) is rigorously evaluated, considering various diversity transmission schemes and propagation conditions. Specifically, MAC TP (medium access control throughput) and the LTE system parameters related to the MIMO technique are analyzed for several indoor propagation conditions, and the performance of multiple-antenna diversity mode is compared with that of single-antenna mode. The results provide guidelines on antenna configuration for polarization diversity in LTE 2×2 MIMO across various indoor channel environments, and can be exploited by network operators and antenna manufacturers.
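
A back-of-the-envelope sketch of why antenna configuration matters for 2×2 MIMO throughput (ours, not the paper's measurement setup): ergodic capacity is averaged over random channels whose spatial correlation stands in for co-polarized versus cross-polarized (less correlated) indoor configurations.

```python
import numpy as np

def ergodic_capacity(snr_db, corr, trials=2000, nt=2, nr=2, seed=1):
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    R = np.array([[1.0, corr], [corr, 1.0]])    # simple Kronecker correlation model
    Rh = np.linalg.cholesky(R)
    caps = []
    for _ in range(trials):
        Hw = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        H = Rh @ Hw @ Rh.T                      # correlated Rayleigh channel
        M = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
        caps.append(np.log2(np.linalg.det(M).real))  # C = log2 det(I + SNR/Nt * HH^H)
    return float(np.mean(caps))

# corr ~ 0 stands in for cross-polarized (diversity) antennas, high corr for co-polarized.
for corr in (0.0, 0.7):
    print(f"corr={corr}: {ergodic_capacity(10, corr):.2f} bit/s/Hz")
```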

Enhanced Biomass Productivity of Freshwater microalga, Parachlorella kessleri for Fixation of Atmospheric CO2 Using Optimal Culture Conditions (최적 배양 조건을 이용한 CO2 제거 목적의 담수 미세조류 Parachlorella kessleri의 바이오매스 생산성 향상)

  • Z-Hun Kim;Sun Woo Hong;Jinu Kim;Byungrak Son;Mi-Kyung Kim;Yong Hwan Kim;Jin Hyun Seol;Su-Hwan Cheon
    • Journal of Marine Bioscience and Biotechnology / v.16 no.1 / pp.36-44 / 2024
  • This study attempted to improve the growth of the freshwater microalga Parachlorella kessleri through the sequential optimization of culture conditions, with the aim of enhancing the microalga's ability to fix atmospheric CO2. Culture temperature and light intensity appropriate for microalgal growth were screened using a high-throughput photobioreactor system. The supplied air flow rate was varied from 0.05 to 0.3 vvm, and its effect on the growth rate of P. kessleri was determined. Next, sodium phosphate buffer was added to the culture medium (BG11) to enhance CO2 fixation by increasing the availability of CO2 (as HCO3-) in the medium. The results indicated that the optimal culture temperature and light intensity were 20-25℃ and 300 μE/m²/s, respectively. Growth rates of P. kessleri depended strongly on the air flow rate and on the culture pH, which determines CO2 availability. Adding sodium phosphate buffer to BG11 to maintain a constant neutral pH (7.0) improved microalgal growth compared to the control condition (BG11 without sodium phosphate). These results indicate that the rate of fixation of CO2 from air can be enhanced via the sequential optimization of microalgal culture conditions.
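
A small helper for the kind of comparison behind the screening step (our illustration; the numbers are hypothetical, not the paper's data): the specific growth rate mu = ln(X2/X1) / (t2 - t1) computed from biomass measurements lets culture conditions be ranked objectively.

```python
from math import log

def specific_growth_rate(x1, x2, t1, t2):
    """mu (1/day) from biomass densities x1, x2 (g/L) measured at times t1, t2 (days)."""
    return log(x2 / x1) / (t2 - t1)

# Hypothetical screening results: biomass after 4 days under three temperature settings.
conditions = {
    "20C, 300 uE/m2/s": (0.10, 0.82),
    "25C, 300 uE/m2/s": (0.10, 0.90),
    "30C, 300 uE/m2/s": (0.10, 0.55),
}
for name, (x0, x4) in conditions.items():
    print(f"{name}: mu = {specific_growth_rate(x0, x4, 0, 4):.3f} /day")
```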

Performance Evaluation of a Dynamic Bandwidth Allocation Algorithm with providing the Fairness among Terminals for Ethernet PON Systems (단말에 대한 공정성을 고려한 이더넷 PON 시스템의 동적대역할당방법의 성능분석)

  • Park Ji-won;Yoon Chong-ho;Song Jae-yeon;Lim Se-youn;Kim Jin-hee
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.11B / pp.980-990 / 2004
  • In this paper, we propose a dynamic bandwidth allocation algorithm for IEEE 802.3ah Ethernet Passive Optical Network (EPON) systems that provides fairness among terminals, and we evaluate its delay-throughput performance by simulation. In conventional EPON systems, an Optical Line Termination (OLT) schedules the upstream bandwidth for each Optical Network Unit (ONU) based on its buffer state. This scheme can allocate bandwidth fairly to each ONU, but it has a critical problem: it does not guarantee fair bandwidth among the terminals connected to the ONUs. For example, assume the traffic from one greedy terminal suddenly increases. The buffer state of its ONU is instantly reported to the OLT, so that ONU receives more bandwidth. As a result, less bandwidth is allocated to the other ONUs, and the transfer delay of terminals connected to them inevitably increases. Noting this unfairness problem in conventional EPON systems, we propose a fair bandwidth allocation scheme in which the OLT considers the buffer state of each ONU as well as the number of terminals connected to it. For the performance evaluation, we developed an EPON simulation model in the SIMULA simulation language. From the throughput-delay performance and the dynamics of the buffer state over time for each terminal and ONU, one can see that the proposed scheme provides fairness among terminals, not just among ONUs. Finally, it is worth noting that the proposed scheme might be an attractive solution for providing fairness among subscriber terminals in public EPON systems.
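
A sketch of the core idea in simplified form (identifiers and numbers are ours, not the paper's algorithm): the OLT weights grants by the number of terminals behind each ONU rather than by reported buffer occupancy alone, so a single greedy terminal cannot inflate its ONU's share at the others' expense.

```python
def allocate_grants(reports, cycle_bytes):
    """reports: {onu: (queued_bytes, n_terminals)} -> {onu: grant_bytes}."""
    total_terminals = sum(n for _, n in reports.values())
    grants = {}
    for onu, (queued, n_terminals) in reports.items():
        fair_share = cycle_bytes * n_terminals / total_terminals  # terminal-weighted
        grants[onu] = min(queued, int(fair_share))                # never grant more than queued
    # redistribute slack from lightly loaded ONUs to backlogged ones
    slack = cycle_bytes - sum(grants.values())
    backlogged = {o: r[0] - grants[o] for o, r in reports.items() if r[0] > grants[o]}
    for onu, need in sorted(backlogged.items(), key=lambda kv: -kv[1]):
        extra = min(need, slack)
        grants[onu] += extra
        slack -= extra
    return grants

# ONU "a" serves 8 terminals, "b" serves 2; a greedy terminal behind "b" queues a burst.
print(allocate_grants({"a": (60000, 8), "b": (90000, 2)}, cycle_bytes=100000))
```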

A Disk Group Commit Protocol for Main-Memory Database Systems (주기억 장치 데이타베이스 시스템을 위한 디스크 그룹 완료 프로토콜)

  • 이인선;염헌영
    • Journal of KIISE: Databases / v.31 no.5 / pp.516-526 / 2004
  • A main-memory database (MMDB) system, in which all data reside in main memory, shows a tremendous performance boost since it needs no disk access during transaction processing. However, because an MMDB still requires disk logging to commit a transaction, logging becomes the bottleneck for transaction throughput, and the commit protocol must be examined carefully. There have been several attempts to reduce the logging overhead; pre-commit and group commit are two well-known techniques that require no additional hardware. However, there has been no research analyzing their effect on MMDB systems. In this paper, we identify the possibility of deadlock resulting from group commit and propose a disk group commit protocol that can be readily deployed. Using extensive simulation, we show that group commit is effective in improving MMDB transaction performance and that the proposed disk group commit almost always outperforms a carefully tuned group commit. We also note that pre-commit has no effect when used alone but shows some improvement when used in conjunction with group commit.
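
A minimal group-commit sketch (our construction, not the paper's protocol, which additionally addresses the deadlock issue): committing transactions block on a queue while a single flusher thread batches their log records and issues one fsync for the whole group, so N commits share one disk flush.

```python
import os, queue, threading

log_fd = os.open("redo.log", os.O_WRONLY | os.O_CREAT | os.O_APPEND)
pending = queue.Queue()             # items: (log_record_bytes, done_event)

def commit(log_record: bytes):
    done = threading.Event()
    pending.put((log_record, done))
    done.wait()                     # block until the group containing us is durable

def group_flusher():
    while True:
        batch = [pending.get()]     # wait for at least one committer
        while not pending.empty():  # then sweep up everyone else already waiting
            batch.append(pending.get_nowait())
        os.write(log_fd, b"".join(rec for rec, _ in batch))
        os.fsync(log_fd)            # one disk flush amortized over the whole group
        for _, done in batch:
            done.set()

threading.Thread(target=group_flusher, daemon=True).start()
commit(b"txn-1 commit\n")           # returns once the fsync covering it completes
```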

An Optimal Space Time Coding Algorithm with Zero Forcing Method in Underwater Channel (수중통신에서 Zero Forcing기법을 이용한 최적의 시공간 부호화 알고리즘)

  • Kwon, Hae-Chan;Park, Tae-Doo;Chun, Seung-Yong;Lee, Sang-Kook;Jung, Ji-Won
    • Journal of Navigation and Port Research / v.38 no.4 / pp.349-356 / 2014
  • In underwater communication, system performance is degraded by inter-symbol interference caused by multipath propagation. In recent years, to cope with the poor channel environment and improve throughput, efficient concatenated structures of equalization, channel coding, and space-time coding have been studied for MIMO underwater communication systems. Space-time codes include space-time block codes and space-time trellis codes; among these, space-time trellis codes are the best match for equalization and channel coding in the MIMO environment. Therefore, in this paper, turbo-pi codes are used as the outer code to transmit efficiently over the multipath channel, while the inner code consists of space-time trellis codes, which provide transmit diversity and coding gain in the MIMO system. A zero-forcing method is used to remove inter-symbol interference. Finally, the performance of this model is simulated in the underwater channel.
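
A minimal illustration of the zero-forcing step named in the abstract (ours; the channel and symbols are invented): for y = Hx + n, applying the pseudo-inverse of the estimated channel cancels inter-stream interference, at the cost of noise enhancement when H is ill-conditioned.

```python
import numpy as np

rng = np.random.default_rng(7)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

x = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)          # QPSK symbols from 2 Tx antennas
noise = 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y = H @ x + noise                                     # received multipath mixture

x_hat = np.linalg.pinv(H) @ y                         # zero-forcing detection
symbols = np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)
print(symbols / np.sqrt(2))                           # recovered QPSK constellation points
```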

Feasibility study of the beating cancellation during the satellite vibration test

  • Bettacchioli, Alain
    • Advances in Aircraft and Spacecraft Science / v.5 no.2 / pp.225-237 / 2018
  • The difficulties of satellite vibration testing arise because commonly expressed qualification requirements are incompatible with the limited performance of the entire controlled system (satellite + interface + shaker + controller). Two features cause the problem: firstly, the main satellite modes (i.e., the first structural mode and the high and low tank modes) are very weakly damped; secondly, the controller is too basic to achieve the expected performance in such cases. The combination of these two issues results in oscillations around the notching levels and high-amplitude beating immediately after the mode. The beating overshoots are a major risk source because they can result in the test being aborted if the qualification upper limit is exceeded. Although the abort is, in itself, a safety measure protecting the tested satellite, it increases the risk of structural fatigue: firstly because the abort threshold has already been reached, and secondly because the test must restart at the same close-to-resonance frequency and remain there until the qualification level is reached and the frequency sweep can continue. The beat minima, by contrast, involve only small successive frequency ranges in which the qualification level is not reached. Although they are less problematic because they do not cause an inadvertent test shutdown, such situations inevitably result in waiver requests from the client. A controlled-system analysis indicates an operating principle that cannot provide sufficient stability: the drive calculation (which controls the process) simply multiplies the frequency reference (usually called cola) by a function of the next setpoint, the ratio between the amplitude already reached and the previous setpoint, and the compression factor. This function value changes at each cola interval, but it never takes the sensor signal phase into account. Because of these limitations, we first examined whether a controller setting process that significantly improves the results could be determined empirically, using a series of tests with a very simple dummy. As this attempt failed, we performed simulations seeking an optimal adjustment by minimizing the least mean square of the difference between the reference and response signals. The simulations showed a significant improvement during the notch beat and a small reduction in the beat amplitude. However, this small improvement was not useful, because it highlighted the need to change the reference at each cola interval, sometimes with instructions almost twice the qualification level. Another uncertainty regarding the consequences of such an approach involves the impact of differences between the estimated model (used in the simulation) and the actual system. As limitations in the current controller were identified in these different approaches, we considered the feasibility of a new controller that takes into account an estimated single-input multi-output (SIMO) model, whose parameters were estimated from a very low-level throughput. Against this backdrop, we analyzed the feasibility of LQG control in cancelling beating, and this article highlights the relevance of such an approach.
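
A feasibility-style sketch of the LQG direction the article analyzes (assumed toy model, not the author's satellite/shaker identification): an LQR state-feedback gain from the discrete algebraic Riccati equation, which in a full LQG controller would be paired with a Kalman estimator, replaces the phase-blind setpoint-ratio drive update.

```python
import numpy as np
from scipy.linalg import expm, solve_discrete_are

# Toy lightly damped 20 Hz mode (stand-in for a weakly damped satellite mode)
dt, wn, zeta = 1e-3, 2 * np.pi * 20.0, 0.005
Ac = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
Bc = np.array([[0.0], [1.0]])
A = expm(Ac * dt)                                   # zero-order-hold discretization
B = np.linalg.solve(Ac, A - np.eye(2)) @ Bc

Q = np.diag([1e4, 1.0])                             # penalize displacement error (beating)
R = np.array([[1e-2]])                              # penalize drive effort
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)   # optimal feedback u = -K x

print("LQR gain:", K)
print("closed-loop pole magnitudes:", np.abs(np.linalg.eigvals(A - B @ K)))
```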