• Title/Summary/Keyword: request scheduling


Scheduling Algorithms and Queueing Response Time Analysis of the UNIX Operating System (UNIX 운영체제에서의 스케줄링 법칙과 큐잉응답 시간 분석)

  • Im, Jong-Seol
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.3
    • /
    • pp.367-379
    • /
    • 1994
  • This paper describes the scheduling algorithms of the UNIX operating system and shows an analytical approach to approximate the average conditional response time for a process in the UNIX operating system. The average conditional response time is the average time between the submittal of a process requiring a certain amount of CPU time and the completion of the process. The process scheduling algorithms in the UNIX system are based on priority service disciplines. That is, the behavior of a process is governed by the UNIX process scheduling algorithms in that (ⅰ) time-shared computer usage is obtained by allotting each request a quantum until it completes its required CPU time, (ⅱ) nonpreemptive switching in system mode and preemptive switching in user mode are applied to determine the quantum, (ⅲ) the first-come-first-served discipline is applied within the same priority level, and (ⅳ) after completing an allotted quantum the process is placed at the end of either the runnable queue corresponding to its priority or the disk queue where it sleeps. These process scheduling algorithms create a round-robin effect in user mode. Using the round-robin effect and the preemptive switching, we approximate a process delay in user mode. Using the nonpreemptive switching, we approximate a process delay in system mode. We also consider a process delay due to disk input and output operations. The average conditional response time is then obtained by approximating the total process delay. The results show an excellent response time for the processes requiring system time at the expense of the processes requiring user time.
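
For orientation, the round-robin (processor-sharing) approximation underlying analyses like the one above has a well-known closed form, E[T(x)] = x / (1 − ρ); the sketch below is only a hedged illustration of that textbook result, not the paper's full model (which also accounts for nonpreemptive system-mode delay and disk I/O delay).

```python
# Hypothetical illustration of a round-robin (processor-sharing) response-time
# approximation; not the paper's exact model.

def ps_response_time(x, rho):
    """Classic M/G/1 processor-sharing result: E[T(x)] = x / (1 - rho),
    the mean conditional response time of a process needing x seconds
    of CPU at utilization rho < 1."""
    if not 0.0 <= rho < 1.0:
        raise ValueError("utilization rho must be in [0, 1)")
    return x / (1.0 - rho)

# A process needing 2 s of user-mode CPU on a system at 70% utilization.
print(f"approx. conditional response time: {ps_response_time(2.0, 0.7):.2f} s")
```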


An Efficient Caching Strategy in Data Broadcasting (데이터 방송 환경에서의 효율적인 캐슁 정책)

  • Kim, Su-Yeon;Choe, Yang-Hui
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.12
    • /
    • pp.1476-1484
    • /
    • 1999
  • Recently, many television broadcasters have tried to disseminate digital multimedia data in addition to the traditional content (audio-visual stream). The broadcast data need to be cached by a client system to provide a reasonable response time for a user request. Previous studies assumed the dissemination of a fixed set of items, and their results are not suitable when broadcast items change frequently. In this paper, we propose a novel client-side cache management scheme that admits every received page and, when replacement is needed, chooses as the victim the page whose next broadcast instance is nearest in time. The proposed scheme improves response time in situations where repeated accesses to the same data are unlikely and the probability distribution of user accesses is hard to predict. The caching policy presented here significantly reduces expected response time by minimizing the expected cache-miss penalty, and it remains effective in environments where multiple broadcast providers using different scheduling algorithms coexist.
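
A minimal sketch of the replacement rule described in the abstract above, assuming the client knows each page's next broadcast time: every received page is admitted, and on overflow the victim is the cached page whose next broadcast instance is soonest. Class and method names are illustrative.

```python
# Illustrative client-side cache: admit every received page, and on overflow
# evict the page whose next broadcast instance is nearest in time.

class BroadcastCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = {}  # page_id -> next broadcast time (seconds from now)

    def on_receive(self, page_id, next_broadcast_time):
        if page_id not in self.pages and len(self.pages) >= self.capacity:
            # Victim: the cached page that will be rebroadcast soonest,
            # i.e. the one that is cheapest to miss later.
            victim = min(self.pages, key=self.pages.get)
            del self.pages[victim]
        self.pages[page_id] = next_broadcast_time

cache = BroadcastCache(capacity=2)
cache.on_receive("A", next_broadcast_time=30)
cache.on_receive("B", next_broadcast_time=5)
cache.on_receive("C", next_broadcast_time=60)  # evicts "B" (rebroadcast soonest)
print(sorted(cache.pages))  # ['A', 'C']
```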

Dynamic Stream Merging Scheme for Reducing the Initial Latency Time and Enhancing the Performance of VOD Servers (VOD 서버의 초기 대기시간 최소화와 성능 향상을 위한 동적 스트림 합병 기법)

  • 김근혜;최황규
    • Journal of the Korea Computer Industry Society
    • /
    • v.3 no.5
    • /
    • pp.529-546
    • /
    • 2002
  • A VOD server, which is the central component of a VOD system, is required to provide high bandwidth and continuous real-time delivery. Sophisticated disk scheduling and data placement schemes are also necessary in VOD servers. One of the most common problems facing such a system is the high initial latency time when servicing multiple users concurrently. In this paper, we propose a dynamic stream merging scheme for reducing the initial latency time in VOD servers. The proposed scheme allows client streams to be merged on request as long as the requests fall within a reasonable time interval. The basic idea behind dynamic stream merging is to merge multiple streams into one by increasing the frame rate of each stream. The performance study shows that the proposed scheme can reduce the initial latency time with minimal buffer use and can also enhance the performance of the VOD server with respect to user admission capacity.
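
The merging arithmetic can be illustrated with a toy calculation: a trailing stream played at a slightly higher frame rate closes the gap to the stream ahead of it, after which the two are served as one. The speedup factor and function below are illustrative assumptions, not values from the paper.

```python
# Illustrative stream-merging arithmetic: a stream started `gap_seconds` after
# an earlier one is played back faster until both reach the same position.

def catch_up_time(gap_seconds, speedup=1.1):
    """Seconds of accelerated playback needed for the trailing stream to
    close a `gap_seconds` head start, given a frame-rate speedup factor."""
    if speedup <= 1.0:
        raise ValueError("speedup must exceed 1.0 for the streams to merge")
    # The trailing stream gains (speedup - 1) seconds of position per second.
    return gap_seconds / (speedup - 1.0)

# A client arriving 30 s after an ongoing stream, played 10% faster,
# merges after 300 s; from then on one stream serves both clients.
print(catch_up_time(30.0, speedup=1.1))  # 300.0
```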


Content-based Load Balancing Technique in Web Server Cluster (웹 서버 클러스터에서 내용 기반으로한 부하 분산 기법)

  • Myung, Won-Shig;Jang, Tea-Mu
    • The KIPS Transactions:PartA
    • /
    • v.10A no.6
    • /
    • pp.729-736
    • /
    • 2003
  • With the rapid growth of the Internet, popular Web sites are visited so frequently that they cannot be served by a single high-performance server or mirror site. The rapid increase in Internet users and usage has raised the problems of excessive transmission traffic and difficult load balancing. To solve these, various server clustering schemes have been studied. In particular, in order to fully utilize the performance of the computers in a cluster, a good scheduling method that distributes user requests evenly among the servers is required. In this paper, we propose a new method for reducing service latency. In our method, each Web server in the cluster holds different content. This helps to reduce both the complexity of the load balancing algorithm and the service latency. The Web server that receives a request from the load balancer responds to the client directly without passing back through the load balancer. Simulation studies show that our method performs better than traditional methods: in terms of response time, it shows about 16% and 14% shorter latency than RR (Round Robin) and LC (Least Connection), respectively.
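
A minimal dispatcher sketch in the spirit of the scheme above: each back-end holds distinct content, the balancer routes by the requested path, and the chosen server would reply to the client directly. The partition table and names are hypothetical.

```python
# Illustrative content-based dispatcher: each back-end server hosts a distinct
# content partition, so the balancer routes purely on the requested path.

CONTENT_MAP = {          # hypothetical partition table
    "/images/": "server-1",
    "/video/":  "server-2",
    "/docs/":   "server-3",
}

def dispatch(request_path):
    """Return the back-end responsible for the requested content.
    That server would answer the client directly, with no return
    trip through the load balancer."""
    for prefix, server in CONTENT_MAP.items():
        if request_path.startswith(prefix):
            return server
    return "server-default"

print(dispatch("/video/trailer.mp4"))  # server-2
```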

Hashing Method with Dynamic Server Information for Load Balancing on a Scalable Cluster of Cache Servers (확장성 있는 캐시 서버 클러스터에서의 부하 분산을 위한 동적 서버 정보 기반의 해싱 기법)

  • Hwak, Hu-Keun;Chung, Kyu-Sik
    • The KIPS Transactions:PartA
    • /
    • v.14A no.5
    • /
    • pp.269-278
    • /
    • 2007
  • Caching in a cache server cluster environment has the advantage of minimizing the request and response time of Internet traffic and Web users. One of the methods that increases the cache hit ratio is to use a hash function with cooperative caching, which keeps the total cache memory at a fixed size regardless of the number of cache servers. On the contrary, without cooperative caching the total cache memory grows in proportion to the number of cache servers, since each cache server must keep all the cache data. The disadvantage of the hashing method is that client requests concentrate on a few of the cache servers due to the characteristics of hashing, and the overall performance of the cache server cluster then depends on those few servers. In this paper, we propose a method that distributes client requests uniformly among cache servers using dynamic server information. We performed experiments using 16 PCs. Experimental results show the uniform distribution of client requests among the cache servers.
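
A rough sketch of hashing tempered by dynamic server information, in the spirit of the approach above: requests hash to a preferred cache server, but load reports can divert a request to a lighter server. The overload threshold and diversion rule are assumptions, not the paper's exact algorithm.

```python
# Illustrative request dispatch: the hash determines the preferred cache
# server, but dynamically reported load can divert a request elsewhere.
# The threshold rule below is an assumption, not the paper's algorithm.

import hashlib

SERVERS = ["cache-0", "cache-1", "cache-2", "cache-3"]
LOAD = {s: 0 for s in SERVERS}  # dynamically reported load (e.g. queue length)

def pick_server(url, overload_factor=2.0):
    digest = hashlib.md5(url.encode()).hexdigest()
    preferred = SERVERS[int(digest, 16) % len(SERVERS)]
    avg = sum(LOAD.values()) / len(LOAD)
    # Divert only when the hashed target is far above the average load.
    if LOAD[preferred] > overload_factor * max(avg, 1):
        return min(SERVERS, key=LOAD.get)
    return preferred

LOAD["cache-1"] = 50
print(pick_server("http://example.com/a.html"))
```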

Dynamic slot allocation scheme for rt-VBR services in the wireless ATM networks (무선 ATM망에서 rt-VBR 서비스를 위한 동적 슬롯 할당 기법)

  • Yang, Seong-Ryoung;Lim, In-Taek;Heo, Jeong-Seok
    • The KIPS Transactions:PartC
    • /
    • v.9C no.4
    • /
    • pp.543-550
    • /
    • 2002
  • This paper proposes a dynamic slot allocation method for real-time VBR (rt-VBR) services in wireless ATM networks. The proposed method is characterized by a contention-based mechanism for reservation requests and a contention-free polling scheme for transferring dynamic parameters. The base station scheduler allocates a dynamic-parameter minislot to each wireless terminal for transferring the residual lifetime and the number of requested slots as dynamic parameters. The scheduling algorithm uses a priority scheme based on the maximum cell transfer delay parameter. Based on the received dynamic parameters, the scheduler allocates the uplink slots to the wireless terminal with the most stringent delay requirement. The simulation results show that the proposed method guarantees the delay constraint of rt-VBR services while significantly reducing the cell loss rate.
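
The allocation rule can be sketched as follows, assuming terminals report a residual lifetime and a requested slot count and the scheduler serves the most delay-critical terminal first; the data structures are illustrative.

```python
# Illustrative uplink slot allocation: each terminal reports its residual
# lifetime (time left before cells violate the delay bound) and how many
# slots it needs; slots go to the most urgent terminal first.

def allocate_slots(requests, slots_per_frame):
    """requests: list of (terminal_id, residual_lifetime, slots_needed).
    Returns {terminal_id: slots_granted} for one uplink frame."""
    grants = {}
    remaining = slots_per_frame
    # Most stringent delay requirement (smallest residual lifetime) first.
    for terminal, lifetime, needed in sorted(requests, key=lambda r: r[1]):
        if remaining == 0:
            break
        granted = min(needed, remaining)
        grants[terminal] = granted
        remaining -= granted
    return grants

reqs = [("T1", 12.0, 3), ("T2", 4.0, 5), ("T3", 8.0, 2)]
print(allocate_slots(reqs, slots_per_frame=8))  # {'T2': 5, 'T3': 2, 'T1': 1}
```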

A Study on Development of the Reliability Evaluation System for VVVF Urban Transit (VVVF 도시철도 차량의 신뢰성 평가 시스템 개발에 관한 연구)

  • Bae Chul-Ho;Kim Sung-Bin;Lee Ho-Yong;Chang Suk-Hwa;Suh Myung-Won
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.13 no.5
    • /
    • pp.7-18
    • /
    • 2005
  • Over the past twenty years, maintenance systems have been developed and their importance has increased. For the effective maintenance of urban transit, we have developed a maintenance system based on the concept of RCM (Reliability Centered Maintenance). RCM analysis is a systematic approach to developing a cost-effective maintenance strategy based on the reliability of the various components of the system in question. It is performed according to a process that includes the following steps: definition of the functions and functional failures of the systems, construction of an RBD (Reliability Block Diagram), performance of FMEA (Failure Modes & Effects Analysis), and calculation of the reliability index. The final step of RCM is to determine appropriate failure maintenance strategies. This paper aims to define the maintenance procedure for urban transit based on the concept of RCM. The key to a successful maintenance system is scheduling that is automated to the maximum extent possible and executed in a timely manner. The developed system issues maintenance plans and repair requests based on analyzed data and maintenance experience.
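
As a small illustration of the RBD step mentioned above, series and parallel blocks combine as R_series = ∏ R_i and R_parallel = 1 − ∏(1 − R_i); the sketch below applies these textbook formulas to a hypothetical two-branch diagram and is not the paper's evaluation system.

```python
# Textbook RBD arithmetic: reliability of series and parallel blocks.
# The example diagram and component values are hypothetical.

from math import prod

def series(*rs):
    return prod(rs)                      # all components must work

def parallel(*rs):
    return 1 - prod(1 - r for r in rs)   # at least one component must work

# Hypothetical subsystem: two redundant converters feeding one traction motor.
converters = parallel(0.95, 0.95)        # 0.9975
subsystem = series(converters, 0.98)     # converters in series with the motor
print(f"subsystem reliability ~= {subsystem:.3f}")  # about 0.978
```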

A Weight based GTS Allocation Scheme for Fair Queuing in IEEE 802.15.4 LR-WPAN (IEEE 802.15.4 LR-WPAN 환경에서 공정 큐잉을 위한 가중치 기반 GTS 할당 기법)

  • Lee, Kyoung-Hwa;Lee, Hyeop-Geon;Shin, Yong-Tae
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.47 no.9
    • /
    • pp.19-28
    • /
    • 2010
  • The GTS (Guaranteed Time Slot) of the IEEE 802.15.4 standard, which is a contention-free access mechanism, is used for low-latency applications or applications requiring specific data bandwidth. But it has some problems, such as service delay due to FIFS (First In First Service) scheduling. In this paper, we propose a weight-based GTS allocation scheme for fair queuing in IEEE 802.15.4 LR-WPAN. The proposed scheme uses a weight formed by giving more weight to recent history than to older history when making a new GTS allocation. This scheme reduces service delay while also guaranteeing transmission within a limited time. The results of the performance analysis show that our approach improves performance compared to the native explicit allocation mechanism defined in the IEEE 802.15.4 standard.
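
The weighting idea above, giving recent history more influence than older history, reads like an exponentially weighted moving average; the sketch below uses that interpretation to rank GTS requests, with the smoothing factor and ranking rule chosen for illustration rather than taken from the paper or the standard.

```python
# Illustrative weight-based GTS ordering: each device's weight is an
# exponentially weighted average of its recent GTS usage, so devices with
# heavier recent traffic are allocated slots first. The smoothing factor
# alpha and the ranking rule are assumptions for illustration.

def update_weight(old_weight, recent_usage, alpha=0.7):
    """Give more weight to the recent observation than to older history."""
    return alpha * recent_usage + (1 - alpha) * old_weight

weights = {"dev-A": 2.0, "dev-B": 5.0, "dev-C": 1.0}
recent  = {"dev-A": 6,   "dev-B": 1,   "dev-C": 2}   # slots used last superframe

weights = {d: update_weight(weights[d], recent[d]) for d in weights}
gts_order = sorted(weights, key=weights.get, reverse=True)
print(gts_order)  # ['dev-A', 'dev-B', 'dev-C']
```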

Analysis of Impact on ERP Customization Module Using CSR Data

  • Yoo, Byung-Keun;Kim, Seung-Hee
    • Journal of Information Processing Systems
    • /
    • v.17 no.3
    • /
    • pp.473-488
    • /
    • 2021
  • The enterprise resource planning (ERP) system is a standardized and advanced business process that many companies nowadays implement through customization. However, these customizations affect operational efficiency because they are company-specific. In this study, we analyzed the impact of customized modules and processing time on customer service requests (CSRs) by utilizing the CSR data accumulated during the construction and operation of ERP systems, focusing on small and medium-sized enterprises (SMEs). As a result, a positive correlation was found between unit companies and the length of ERP implementation; ERP modules and the length of ERP implementation; ERP modules and unit companies; and the type of ERP implementation and ERP module. In terms of CSRs, a comparison of CSR processing time between the CBO (customized business object) module and the STD (standard) module revealed that while five modules did not display statistically significant differences, one module demonstrated a highly significant difference. In sum, the analysis indicates that CBO-type CSRs and their processing cost are higher than those of STD-type CSRs. These results indicate that companies planning to implement an ERP system should consider the ERP module and their customization ratio and level. This not only provides theoretical grounds for treating the customization ratio as an indicator in decision making when an ERP system is constructed, but also implies, through the impact on processing time, that the maintenance costs and project scheduling of the ERP software must be considered as well. This study is the first to present the degree of impact of customized modules on operation and maintenance based on actual data, and it can provide a theoretical basis for applying the SW change ratio in cost estimation for ERP system maintenance.
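
The module-level comparison described above amounts to testing whether CBO-type and STD-type CSR processing times differ significantly; a minimal sketch of such a test, on fabricated placeholder numbers rather than the study's data, is shown below.

```python
# Illustrative significance test on CSR processing times (hours) for one
# module; the numbers are fabricated placeholders, not the study's data.

from statistics import mean
from scipy.stats import ttest_ind  # Welch's t-test when equal_var=False

cbo_hours = [14.0, 18.5, 22.0, 16.5, 20.0]   # customized (CBO) CSRs
std_hours = [ 6.0,  8.5,  7.0,  9.5,  5.5]   # standard (STD) CSRs

t_stat, p_value = ttest_ind(cbo_hours, std_hours, equal_var=False)
print(f"mean CBO={mean(cbo_hours):.1f}h, mean STD={mean(std_hours):.1f}h, "
      f"p={p_value:.4f}")
```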

Effective Dynamic Broadcast Method in Hybrid Broadcast Environment (하이브리드 브로드캐스트 환경에서 효과적인 동적 브로드캐스팅 기법)

  • Choi, Jae-Hoon;Lee, Jin-Seung;Kang, Jae-Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.2
    • /
    • pp.103-110
    • /
    • 2009
  • We are witnessing a rapid increase in the number of wireless devices available today, such as cell phones, PDAs, and WiBro-enabled devices. Because of the inherent limitation of the bandwidth available for wireless channels, broadcast systems have attracted the attention of the research community. The main problem in this area is to develop an efficient broadcast program. In this paper, we propose a dynamic broadcast method that overcomes the limitations of static broadcast programs. It optimizes the scheduling based on a probabilistic model of user requests. We show that a dynamic broadcast system can indeed improve the quality of service by exploiting user requests. This paper extends our previous work in [1] to include a more thorough explanation of the proposed methodology and more diverse performance evaluation models.
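
A toy sketch of demand-driven broadcast scheduling in the spirit of the abstract above: request probabilities are estimated from a sliding window of observed requests, and the next item to broadcast is chosen greedily. Both the estimation and the selection rule are illustrative assumptions, not the paper's algorithm.

```python
# Toy dynamic broadcast scheduler: estimate each item's request probability
# from a sliding window of observed requests, then broadcast the item with
# the largest (probability x time-since-last-broadcast) product. Both rules
# are illustrative assumptions, not the paper's algorithm.

from collections import Counter, deque

WINDOW = 100                      # number of recent requests considered
requests = deque(maxlen=WINDOW)   # sliding window of requested item ids
last_broadcast = {}               # item -> ticks since it was last broadcast

def record_request(item):
    requests.append(item)
    last_broadcast.setdefault(item, 0)

def next_item():
    counts = Counter(requests)
    total = sum(counts.values()) or 1
    def score(item):
        return (counts[item] / total) * (last_broadcast[item] + 1)
    choice = max(last_broadcast, key=score)
    for item in last_broadcast:
        last_broadcast[item] += 1
    last_broadcast[choice] = 0
    return choice

for it in ["news", "news", "weather", "news", "sports"]:
    record_request(it)
print(next_item())  # 'news' (most requested, all items equally stale)
```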