• Title/Summary/Keyword: Resource Sharing Service


Quality of Service Tradeoff in Device to Device Communication Underlaid Cellular Infrastructure

  • Boabang, Francis;Hwang, Won-Joo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.05a
    • /
    • pp.591-593
    • /
    • 2016
  • Device-to-device (D2D) communication underlaying cellular infrastructure is a competitive local-area service technology for improving spectrum usage in next-generation cellular networks. This potential can only be tapped through efficient interference coordination. Previous works concentrated only on interference from D2D pairs, while interference from cellular users (CUs) to D2D pairs was neglected. This work focuses on solving the uplink interference problem that arises when multiple CUs share their resources with multiple D2D pairs. The base station (BS), acting as a supervisor, selfishly institutes a pricing scheme to manage the interference it experiences from D2D pairs based on its quality-of-service (QoS) requirement. Following the supervisor, the D2D pairs make power allocation decisions that account for the BS's price in a non-cooperative game fashion. In order for the D2D pairs to also meet their own QoS requirements, they suggest a price to the BS, called a discount price, which reflects the interference they receive from the CUs. Finally, we analyze the proposed approach; an illustrative sketch of such a pricing game is given after this entry.

  • PDF
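To make the game structure concrete, here is a minimal Python sketch of a Stackelberg-style pricing loop of the kind the abstract describes: the BS adjusts an interference price toward its QoS target while each D2D pair best-responds with a transmit power discounted by the CU interference it measures. The function name, channel-gain fields, discount term, and update rules are illustrative assumptions, not the authors' formulation.

```python
# Illustrative sketch only (not the paper's algorithm): leader-follower pricing
# between a BS and D2D pairs. All field names and update rules are hypothetical.

def d2d_pricing_game(pairs, bs_price=1.0, qos_sinr=2.0, noise=1e-3,
                     step=0.05, iterations=100):
    """pairs: list of dicts with hypothetical keys
       'g_d2d' (own link gain), 'g_bs' (gain to BS), 'i_cu' (CU interference)."""
    powers = [0.1 for _ in pairs]
    for _ in range(iterations):
        # Each D2D pair maximizes log-rate minus the interference price it pays to
        # the BS, reduced by a "discount" reflecting the CU interference it suffers.
        for k, p in enumerate(pairs):
            discount = p['i_cu'] / (p['i_cu'] + noise)            # hypothetical discount term
            effective_price = max(bs_price * p['g_bs'] * (1 - discount), 1e-6)
            # Best response of max_p log(1 + g*p/(I+N)) - price*p  ->  p = 1/price - (I+N)/g
            powers[k] = max(1.0 / effective_price - (p['i_cu'] + noise) / p['g_d2d'], 0.0)
        # The BS (leader) raises the price if aggregate D2D interference violates its
        # QoS target and lowers it otherwise (unit-power CU signal assumed).
        bs_interference = sum(p['g_bs'] * pw for p, pw in zip(pairs, powers))
        bs_sinr = 1.0 / (bs_interference + noise)
        bs_price = max(bs_price + step * (qos_sinr - bs_sinr), 0.0)
    return bs_price, powers
```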

QoS- and Revenue Aware Adaptive Scheduling Algorithm

  • Joutsensalo, Jyrki;Hamalainen, Timo;Sayenko, Alexander;Paakkonen, Mikko
    • Journal of Communications and Networks
    • /
    • v.6 no.1
    • /
    • pp.68-77
    • /
    • 2004
  • In the near future, packet networks should support applications that cannot predict their traffic requirements in advance but still have tight quality-of-service requirements, e.g., guaranteed bandwidth, jitter, and packet loss. These dynamic characteristics mean that the sources can be made to modify their data transfer rates according to network conditions. Depending on the customer's needs, a network operator can differentiate incoming connections and handle them in the buffers and at the interfaces in different ways. In this paper, a dynamic QoS-aware scheduling algorithm is presented and investigated in the single-node case. In addition to fair resource sharing among traffic classes with different priorities, the purpose of the algorithm is to maximize the revenue of the service provider. It is derived from a linear revenue target function, and a closed-form, globally optimal formula is presented. The method is computationally inexpensive while still producing maximal revenue. Due to its simplicity, the algorithm can operate in highly nonstationary environments. In addition, it is nonparametric and deterministic in the sense that it uses only the information about the number of users and their traffic classes, not call density functions or duration distributions. A Call Admission Control (CAC) mechanism based on hypothesis testing is also used.
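As a rough illustration of revenue-aware weight allocation (not the closed-form formula derived in the paper), the sketch below reserves each class's guaranteed minimum and then, because the revenue target is linear in allocated bandwidth, gives the leftover capacity to the class with the highest price per unit. The class fields, prices, and the admission-control check are hypothetical.

```python
# Minimal illustrative sketch of linear-revenue weight allocation; all names are assumed.

def allocate_weights(classes, capacity):
    """classes: list of dicts with 'users', 'price_per_unit', 'min_rate_per_user'."""
    # Reserve every class's guaranteed minimum (a simple CAC step would reject
    # connections that make this infeasible).
    alloc = {i: c['users'] * c['min_rate_per_user'] for i, c in enumerate(classes)}
    remaining = capacity - sum(alloc.values())
    if remaining < 0:
        raise ValueError("guaranteed minima exceed capacity; admission control should reject")
    # With a linear revenue target, leftover capacity is most profitably given to the
    # class paying the highest price per unit of bandwidth.
    best = max(range(len(classes)), key=lambda i: classes[i]['price_per_unit'])
    alloc[best] += remaining
    # Convert allocations to scheduler weights (fractions of the link).
    return {i: a / capacity for i, a in alloc.items()}

# Example: two classes sharing a 100 Mb/s link.
print(allocate_weights(
    [{'users': 10, 'price_per_unit': 3.0, 'min_rate_per_user': 2.0},
     {'users': 20, 'price_per_unit': 1.0, 'min_rate_per_user': 1.5}], 100.0))
```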

Development of Win32 API Message Authorization System for Windows based Application Provision Service (윈도우 기반 응용프로그램 제공 서비스를 위한 Win32 API 메시지 인가 시스템의 개발)

  • Kim, Young-Ho;Jung, Mi-Na;Won, Yong-Gwan
    • The KIPS Transactions:PartC
    • /
    • v.11C no.1
    • /
    • pp.47-54
    • /
    • 2004
  • The growth of computer resources and network speed has increased requests for the use of remotely located computer systems over computer networks. This phenomenon has boosted research on application service provision that uses the server-based remote computing paradigm. This paradigm has been developed into the ASP (Application Service Provision) model, which gives remote users access to application programs through an application-sharing protocol. Security requirements such as confidentiality, availability, and integrity should be satisfied to provide ASP service on a centralized computing system. Existing Telnet or FTP services for remote computing systems satisfy these requirements with simple access control to files and/or data. A Windows-based centralized computing system, however, is vulnerable in terms of confidentiality, availability, and integrity when many users run the same application program installed on the same computer. In other words, the computing system needs a fine-grained, per-user security level, such that only authorized users or groups of users can run specific functional commands of the program. In this paper, we propose a Windows-based centralized computing system that sets per-user security policies for the use of application-program instructions and performs access control over those instructions based on the policies. The system monitors all user messages issued through the graphical user interface by the users connected to the system. All instructions, i.e., messages, for the application program are passed to an authorization process that decides whether an instruction is delivered to the application program based on the pre-defined security policies. This system can be used to enforce per-user security clearance for shared computing resources as well as shared application programs.
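The sketch below illustrates only the authorization decision at the core of such a system, under assumed names: intercepted GUI messages are checked against a per-user policy table before being forwarded to the shared application. The policy format, message identifiers, and deliver() hook are hypothetical; a real implementation would sit behind a Win32 message hook such as WH_CALLWNDPROC.

```python
# Hedged sketch of per-user message authorization; identifiers are hypothetical.

POLICY = {
    # user        allowed command messages (hypothetical identifiers)
    "alice": {"FILE_OPEN", "FILE_SAVE", "EDIT_COPY"},
    "bob":   {"FILE_OPEN", "EDIT_COPY"},          # bob may not save
}

def authorize(user, message, deliver, audit_log):
    """Deliver the message and return True if the policy allows it; otherwise drop it."""
    allowed = POLICY.get(user, set())
    if message in allowed:
        deliver(message)
        return True
    audit_log.append((user, message, "denied"))
    return False

# Example usage with stand-in deliver/audit objects.
log = []
authorize("bob", "FILE_SAVE", deliver=lambda m: print("forwarded:", m), audit_log=log)
print(log)   # [('bob', 'FILE_SAVE', 'denied')]
```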

A Linear System Approach to Serving Gaussian Traffic in Packet-Switching Networks (패킷 교환망에서 가우스 분포 트래픽을 서비스하는 선형 시스템 접근법)

  • Chong, Song;Shin, Min-Su;Chong, Hyun-Hee
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.5
    • /
    • pp.553-561
    • /
    • 2002
  • We present a novel service discipline, called the linear service discipline, to serve multiple QoS queues sharing a resource, and we analyze its properties. The linear server makes the output traffic and the queueing dynamics of each individual queue a linear function of its input traffic. In particular, if the input traffic is Gaussian, the distributions of queue length and output traffic are also Gaussian, with mean and variance that are functions of the input mean and input power spectrum (equivalently, the autocorrelation function of the input). Important QoS measures, including buffer overflow probability and queueing delay distribution, are also expressed as functions of the input mean and input power spectrum. This study explores a new direction for network-wide traffic management based on linear system theory by letting us view the queueing process at each node as a linear filter.
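A minimal numerical sketch of the Gaussian consequence stated above: if the queue length under the linear discipline is Gaussian with mean m_q and standard deviation s_q (both determined by the input mean and power spectrum), the buffer overflow probability is the Gaussian tail evaluated at the buffer size B. The numbers in the example are hypothetical, and the paper itself derives m_q and s_q from the input statistics.

```python
# Gaussian-tail overflow probability; parameter values below are hypothetical.
import math

def gaussian_tail(x):
    """Q-function: P(Z > x) for a standard normal Z."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def overflow_probability(buffer_size, mean_queue, std_queue):
    return gaussian_tail((buffer_size - mean_queue) / std_queue)

# Example: mean queue length 40 packets, standard deviation 15, buffer of 100 packets.
print(overflow_probability(100.0, 40.0, 15.0))   # about 3e-5
```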

A Study on the Influence of Shipping Firms' Knowledge Management on their Service Capabilities (지식경영이 해운선사의 서비스 역량에 미치는 영향에 관한 연구)

  • Choe, YunSeok;Lee, SangYoon
    • Journal of Korea Port Economic Association
    • /
    • v.28 no.3
    • /
    • pp.91-110
    • /
    • 2012
  • In the modern management literature, knowledge has been recognized as a new strategic resource enabling a firm to create competitiveness. Shipping companies facing a fiercely competitive structure need to pay attention to the utility of knowledge management. A shipping firm may develop a unique service capability by classifying, sharing, and transferring the data, information, and knowledge obtained from both inside and outside its global service network. The current study analyzes the influential relationships between liner shipping firms' knowledge management and their service capabilities. To achieve this goal, the study designed a knowledge chain model measuring shipping companies' knowledge management levels and tested its validity and reliability based on a total of 80 returned questionnaires from national and foreign liners. The empirical result shows that the supportive and primary activities composing a knowledge chain exert significant positive influences on the strengthening of shipping service capabilities. This research shows that the utility of knowledge management applies to the maritime industry and recommends that shipping firms recognize the strategic importance of knowledge and actively pursue knowledge management at the firm-wide level.

A Reputation Management Scheme Improving the Trustworthiness of Multi-peers and Shared Resources in P2P Networks (다중 피어 및 공유 자원의 신뢰성 향상을 위한 P2P 네트워크의 평판 관리)

  • Shin, Jung-Hwa;Kim, Tae-Hoon;Tak, Sung-Woo
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.10
    • /
    • pp.1409-1419
    • /
    • 2008
  • Inauthentic resources can be easily spread by P2P (Peer-to-Peer) participants due to the openness and anonymity of P2P networks. A possible way to restrict the distribution of inauthentic resources and prevent malicious peers from joining P2P networks is to exploit peers' reputations, which reflect their past behaviors and are also helpful in predicting their future behaviors. However, some peers may intentionally collude with other peers in order to increase or decrease a reputation through false feedback exchanges. Therefore, we propose a new reputation management scheme, called the TrustRRep (Trustable Resource sharing service using Reputation) scheme, which improves the trustworthiness and efficiency of P2P networks by identifying peers who give false feedback. The TrustRRep scheme is also capable of providing peers with the trustworthiness of shared resources by discriminating resources distributed by malicious peers. We implement the proposed TrustRRep scheme on the NS-2 simulator to evaluate its performance against recent reputation management work in the literature. A simulation case study shows that the proposed reputation management scheme performs efficiently in terms of the minimal download ratio and dissemination of inauthentic resources, the identification of peers who give false feedback, and the provisioning of trustworthy peer reputations. It also shows that the proposed TrustRRep scheme restricts a malicious peer's participation in the P2P network by diminishing its trust value. An illustrative sketch of trust-weighted feedback aggregation is given after this entry.

  • PDF
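The sketch below illustrates one common way to realize the idea of discounting false feedback (it is not the TrustRRep algorithm itself): feedback on a target peer is aggregated with each rater weighted by its own trust value, and raters whose scores deviate strongly from the weighted consensus are flagged and penalized. The threshold and field names are assumptions.

```python
# Illustrative trust-weighted feedback aggregation; thresholds and names are hypothetical.

def update_reputation(feedback, trust, deviation_threshold=0.4):
    """feedback: {rater: score in [0,1]}, trust: {rater: trust value in [0,1]}."""
    total_weight = sum(trust.get(r, 0.0) for r in feedback) or 1.0
    consensus = sum(score * trust.get(r, 0.0) for r, score in feedback.items()) / total_weight
    suspects = [r for r, score in feedback.items()
                if abs(score - consensus) > deviation_threshold]
    # Penalize suspected false-feedback peers by lowering their own trust values.
    for r in suspects:
        trust[r] = max(trust.get(r, 0.0) * 0.5, 0.0)
    return consensus, suspects

reputation, flagged = update_reputation(
    {"p1": 0.9, "p2": 0.85, "p3": 0.1},        # p3 reports a very different experience
    {"p1": 0.8, "p2": 0.7, "p3": 0.6})
print(reputation, flagged)                      # p3 is flagged and its trust is halved
```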

State of Information Technology and Its Application in Agricultural Meteorology (농업기상활용 정보기술 현황)

  • Byong-Lyol Lee;Dong-Il Lee
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.6 no.2
    • /
    • pp.118-126
    • /
    • 2004
  • Grid is a new Information Technology (IT) concept of a "super Internet" for high-performance computing: worldwide collections of high-end resources such as supercomputers, storage, advanced instruments, and immersive environments. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, real-time data sources and instruments, and human collaborators. The term "the Grid" was coined in the mid-1990s to denote a proposed distributed computing infrastructure for advanced science and engineering. The term "computational Grids" refers to infrastructures aimed at allowing users to access and/or aggregate potentially large numbers of powerful and sophisticated resources. More formally, Grids are defined as infrastructure allowing flexible, secure, and coordinated resource sharing among dynamic collections of individuals, institutions, and resources referred to as virtual organizations. The Grid is an emerging IT, a kind of next-generation Internet technology, that will fit very well with agrometeorological services in the future. I believe that it would contribute to resource sharing in agrometeorology by providing supercomputing power, virtual storage, and efficient data exchange, especially for developing countries that suffer from a lack of resources for their agrometeorological services at the national level. Thus, the establishment of a CAgM-GRID based on the existing RADMINSII is proposed as a part of the FWIS of WMO.

Ontology Based Semantic Information System for Grid Computing (그리드 컴퓨팅을 위한 온톨로지 기반의 시맨틱 정보 시스템)

  • Han, Byong-John;Kim, Hyung-Lae;Jeong, Chang-Sung
    • Journal of Internet Computing and Services
    • /
    • v.10 no.4
    • /
    • pp.87-103
    • /
    • 2009
  • Grid computing is an extension of distributed computing technology that exploits low-cost, high-performance computing power in various fields. Although Grid computing focuses on large-scale resource sharing, innovative applications, and in some cases high-performance orientation, it has so far been used like a conventional distributed computing environment, such as a clustered computer, because Grid middleware does not provide a common sharable information system. To use a Grid computing environment consisting of various Grid middlewares efficiently, an application-independent information system is needed that can share information descriptions and services and extend them easily. Thus, in this paper we propose a semantic information system framework based on web services and ontology for the Grid computing environment, called WebSIS. It makes it easy for application and middleware developers to build a sharable and extensible information system that shares information descriptions and provides ontology-based, platform-independent information services. We present an efficient ontology-based information system architecture through WebSIS. Discovering appropriate resources for task execution on the Grid requires higher-level information processing, because the Grid computing environment is more complex than traditional distributed computing environments and involves various considerations needed for Grid task execution. Thus, using WebSIS, we design and implement a resource information system and services that enable high-level information processing by ontology reasoning and semantic matching for the automation of task execution on the Grid. A small sketch of such ontology-based matching is given after this entry.

  • PDF
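A small sketch of the kind of ontology-based semantic matching the resource information system relies on, under assumed names: resource types are matched against a request by walking a subclass hierarchy, so a request for a general class is satisfied by any of its subclasses. The toy hierarchy and resource descriptions are hypothetical; the real system uses web services and a full ontology reasoner.

```python
# Toy ontology-based resource matching; the hierarchy and descriptions are hypothetical.

SUBCLASS_OF = {                      # child -> parent in a toy resource ontology
    "GPUCluster": "ComputeResource",
    "PCCluster": "ComputeResource",
    "ComputeResource": "GridResource",
    "StorageResource": "GridResource",
}

def is_a(resource_type, requested_type):
    """True if resource_type equals requested_type or is a (transitive) subclass of it."""
    while resource_type is not None:
        if resource_type == requested_type:
            return True
        resource_type = SUBCLASS_OF.get(resource_type)
    return False

def match_resources(resources, requested_type):
    """Return resources whose declared type semantically satisfies the request."""
    return [r for r in resources if is_a(r["type"], requested_type)]

resources = [{"name": "siteA", "type": "PCCluster"},
             {"name": "siteB", "type": "StorageResource"}]
print(match_resources(resources, "ComputeResource"))   # only siteA matches
```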

A Execution Performance Analysis of Applications using Multi-Process Service over GPU (다중 프로세스 서비스를 이용한 GPU 응용 동시 실행 성능 분석)

  • Kim, Se-Jin;Oh, Ji-Sun;Kim, Yoonhee
    • KNOM Review
    • /
    • v.22 no.1
    • /
    • pp.60-67
    • /
    • 2019
  • Graphics Processing Units (GPUs) achieve high performance by carrying out relatively uniform computations in parallel. General-Purpose GPU (GPGPU) technology has advanced to provide concurrent kernel execution for multiple, diverse applications at the same time, but its support for resource sharing or scheduling is still limited. NVIDIA recently introduced the Multi-Process Service (MPS), which allows kernels from different applications to execute concurrently. However, the benefit of MPS depends on the characteristics of the applications and the order of their execution. This paper presents a performance analysis of diverse real-world scientific applications. Based on the analysis, we show that identifying the characteristics of co-running applications and scheduling multiple applications via profiling are important for maximizing the benefit of MPS.
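A hedged sketch of the experimental setup such an analysis implies: start the NVIDIA MPS control daemon, launch several GPU applications so their kernels can share the device concurrently, then shut the daemon down. The application paths are placeholders; nvidia-cuda-mps-control with the -d flag and the quit command are the standard MPS controls, and no other flags are assumed here.

```python
# Sketch of co-running GPU applications under MPS; application paths are placeholders.
import subprocess

APPS = ["./app_a", "./app_b"]        # hypothetical co-run GPU applications

def run_with_mps(apps):
    subprocess.run(["nvidia-cuda-mps-control", "-d"], check=True)   # start the MPS daemon
    try:
        procs = [subprocess.Popen(app) for app in apps]             # launch apps concurrently
        for p in procs:
            p.wait()
    finally:
        # Ask the daemon to exit; "quit" is sent on its standard input.
        subprocess.run(["nvidia-cuda-mps-control"], input=b"quit\n", check=False)

if __name__ == "__main__":
    run_with_mps(APPS)
```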

REDUCING LATENCY IN SMART MANUFACTURING SERVICE SYSTEM USING EDGE COMPUTING

  • Vimal, S.;Jesuva, Arockiadoss S;Bharathiraja, S;Guru, S;Jackins, V.
    • Journal of Platform Technology
    • /
    • v.9 no.1
    • /
    • pp.15-22
    • /
    • 2021
  • In a smart manufacturing environment, more and more devices are connected to the Internet so that a large volume of data can be obtained during all phases of the product life cycle. Large-scale industries, companies, and organizations with operational units scattered across various geographical locations face huge resource consumption because of the unorganized way resources are shared among them, which directly affects their supply chains. The cloud-based smart manufacturing paradigm facilitates a new variety of applications and services to analyze a large volume of data and enables large-scale manufacturing collaboration. The manufacturing units include machinery that may be situated in different geographical areas, and the data from process instances executed on different machinery must be constantly managed by a super admin to coordinate the manufacturing process; in large-scale industries, these environments make it tedious to maintain the efficiency of the production unit. The data from all these instances should be monitored to maintain the integrity of the manufacturing service system, yet all of these data are computed in the cloud environment, which introduces latency into the smart manufacturing service system. Instead of validating data at an external device, we propose to validate the data at the front end of each device. The validation process can be automated by script validation, after which the processed data are sent to the cloud processing and storage unit. Along with end-device data validation, we implement Asset Performance Management (APM) to enhance the productive functionality of the manufacturers. The manufacturing service system is chunked into modules based on the functionalities of the machines and on process instances corresponding to the time schedules of the respective machines. By breaking the whole system into modules, with further divisions as required, we can reduce data loss or data mismatch caused by processing data from instances that are down for maintenance or affected by machinery malfunctions. This helps the admin trace the individual domains of the smart manufacturing service system that need attention for error recovery among the various process instances from different machines operating under various conditions. This reduces latency, which in turn increases the efficiency of the whole system.
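A minimal sketch of the proposed front-end validation step, under assumed field names and ranges: each edge device checks its own records against simple range rules and forwards only valid records to the cloud, so invalid data never incurs a cloud round trip, and rejected records stay local for APM diagnostics. The send_to_cloud() stub is a placeholder.

```python
# Edge-side validation sketch; field names, ranges, and the upload stub are hypothetical.

RULES = {
    "temperature_c": (-20.0, 120.0),     # acceptable operating range
    "vibration_mm_s": (0.0, 50.0),
}

def validate(record):
    """Return True if every required field is present and within its allowed range."""
    for field, (low, high) in RULES.items():
        value = record.get(field)
        if value is None or not (low <= value <= high):
            return False
    return True

def send_to_cloud(record):                # placeholder for the real upload call
    print("uploaded:", record)

def process_at_edge(records):
    rejected = 0
    for record in records:
        if validate(record):
            send_to_cloud(record)
        else:
            rejected += 1                 # keep rejects local for APM diagnostics
    return rejected

print(process_at_edge([{"temperature_c": 35.2, "vibration_mm_s": 3.1},
                       {"temperature_c": 250.0, "vibration_mm_s": 2.0}]))
```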