• Title/Summary/Keyword: Granularity

Extended GTRBAC Model for Access Control Enforcement in Enterprise Environments (기업환경의 접근제어를 위한 확장된 GTRBAC 모델)

  • Park Dong-Eue;Hwang Yu-Dong
    • Journal of Korea Multimedia Society / v.8 no.2 / pp.211-224 / 2005
  • With the wide acceptance of the Internet and the Web, the volume of information and the number of users have grown, companies need security mechanisms to effectively protect the information that is important to their business activities, and security problems have become increasingly difficult. This paper proposes an improved access control model for access control enforcement in enterprise environments, obtained by integrating the temporal constraints of the GTRBAC model with the concept of sub-role hierarchies. The proposed model, called the Extended GTRBAC (Extended Generalized Temporal Role Based Access Control) model, supports the characteristics of the GTRBAC model, such as temporal constraints, various time-constrained cardinalities, control flow dependencies, and separation of duty constraints (SoDs). It also supports unconditional inheritance based on the degree of inheritance and on business characteristics by using sub-role hierarchies, which allows access control policies to be expressed at a finer granularity in corporate enterprise environments.
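
A minimal sketch of the sub-role idea (hypothetical role and permission names, with a single clock-time window standing in for GTRBAC's richer temporal constraints; this is not the paper's model):

```python
from datetime import datetime, time

# Sketch only: sub-roles whose permissions are either inheritable by junior
# roles or private to the owning role, plus a simple temporal enabling check.

class SubRole:
    def __init__(self, name, permissions, inheritable=True):
        self.name = name
        self.permissions = set(permissions)
        self.inheritable = inheritable   # False = not propagated to junior roles

class Role:
    def __init__(self, name, sub_roles, active_hours=(time(0, 0), time(23, 59))):
        self.name = name
        self.sub_roles = sub_roles
        self.active_hours = active_hours  # temporal constraint on role enabling

    def enabled(self, now=None):
        now = (now or datetime.now()).time()
        start, end = self.active_hours
        return start <= now <= end

    def permissions(self, include_private=True):
        perms = set()
        for sr in self.sub_roles:
            if include_private or sr.inheritable:
                perms |= sr.permissions
        return perms

# A junior role inherits only the *inheritable* sub-roles of a senior role.
corporate = SubRole("corporate-common", {"read:notice"}, inheritable=True)
dept_priv = SubRole("dept-private", {"approve:budget"}, inheritable=False)
manager = Role("manager", [corporate, dept_priv], active_hours=(time(9), time(18)))
employee = Role("employee",
                [SubRole("inherited", manager.permissions(include_private=False))])

if manager.enabled():
    print("manager perms:", manager.permissions())
print("employee perms:", employee.permissions())  # no 'approve:budget'
```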

Adaptive Differentiated Integrated Routing Scheme for GMPLS-based Optical Internet

  • Wei, Wei;Zeng, Qingji;Ye, Tong;Lomone, David
    • Journal of Communications and Networks / v.6 no.3 / pp.269-279 / 2004
  • A new online multi-layer integrated routing (MLIR) scheme that combines IP (electrical) layer routing with WDM (optical) layer routing is investigated. It is a highly efficient and cost-effective routing scheme viable for the next-generation integrated optical Internet. A new simplified weighted graph model for the integrated optical Internet, consisting of optical routers with multi-granularity optical-electrical hybrid switching capability, is first proposed. Then, based on the proposed graph model, we develop an online integrated routing scheme called the differentiated weighted fair algorithm (DWFA), which employs adaptive admission control (routing) strategies motivated by service/bandwidth differentiation and can jointly solve the multi-layer routing problem simply by applying a minimal weighted path computation algorithm. The major objective of DWFA is fourfold: 1) quality of service (QoS) routing for traffic requests with various priorities; 2) blocking fairness for traffic requests with various bandwidth granularities; 3) adaptive routing according to policy parameters from the service provider; 4) lower computational complexity. Simulation results show that DWFA performs better than traditional overlay routing schemes such as optical-first routing (OFR) and electrical-first routing (EFR) in terms of traffic blocking ratio, traffic blocking fairness, average traffic logical hop count, and global network resource utilization. These results indicate that DWFA is a simple, comprehensive, and practical integrated routing scheme for service providers in the optical Internet.
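
The core of such a scheme, a minimal weighted path computation over an integrated graph with per-class edge weights, can be sketched as follows (illustrative topology and weight biases; the actual DWFA weight functions and admission control are defined in the paper):

```python
import heapq

# Sketch only: shortest path over an IP (electrical) + WDM (optical) graph,
# with edge weights differentiated per service class. Numbers are made up.

# edge: (u, v, base_cost, layer) with layer in {"electrical", "optical"}
EDGES = [
    ("A", "B", 1.0, "electrical"), ("B", "C", 1.0, "electrical"),
    ("A", "C", 2.5, "optical"),    ("C", "D", 1.0, "optical"),
    ("B", "D", 3.0, "electrical"),
]

def edge_weight(base_cost, layer, priority):
    # Policy knob: high-priority traffic is steered toward optical bypass
    # links, low-priority traffic toward cheaper electrical hops.
    bias = {"optical": 0.5, "electrical": 1.5} if priority == "high" else \
           {"optical": 1.5, "electrical": 0.5}
    return base_cost * bias[layer]

def min_weight_path(src, dst, priority):
    graph = {}
    for u, v, c, layer in EDGES:
        w = edge_weight(c, layer, priority)
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    dist, heap = {src: 0.0}, [(0.0, src, [src])]
    while heap:                                   # Dijkstra with path tracking
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return d, path
        if d > dist.get(node, float("inf")):
            continue
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return float("inf"), []

print(min_weight_path("A", "D", "high"))   # prefers the optical bypass A-C-D
print(min_weight_path("A", "D", "low"))    # prefers cheap electrical hops
```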

A Study for Improvement of the Testing Methods for Quality Control of Recycled Aggregate (순환골재의 품질평가를 위한 시험방법 개선에 관한 실험적 연구)

  • Jaung, Jae-Dong;Lee, Do-Heun
    • Journal of the Korea Institute of Building Construction / v.8 no.4 / pp.105-114 / 2008
  • This study investigates the saturation level of surface dryness, the quantity of adhesive mortar, and the foreign substance content of recycled aggregates for concrete, in order to develop an adequate quality testing method for understanding the properties of recycled aggregates, which differ greatly from those of conventional aggregates. For the test that measures the saturation level of surface dryness, whose detailed procedure varies from tester to tester, testing methods from around the world were compared and analyzed. The study revealed that when measuring the saturation level of surface dryness of a sample, aggregates must be replenished immediately whenever the height of the sample falls below the measuring mold, and that allowing the tamper to fall freely onto the sample gives the most accurate results. When measuring the quantity of adhesive mortar of recycled aggregates for concrete, an acid solution was used; since the quantity of adhesive mortar increases as the particle size gets smaller, the test sample should represent the entire granularity range. A sulfuric acid solution is adequate for immersion, and a concentration of 20% gives the best results. In the measurement of foreign substance content, which was examined by the naked eye, the error caused by differences in particle size was negligible, so a sample of 2.5~5.0 mm is appropriate considering accuracy and measuring time. Also, for coarse recycled aggregates, a 1 kg sample is sufficient for measuring foreign substance content by the naked eye, which indicates that visual sorting is an adequate method for measuring the foreign substance content of recycled aggregates.

Load Distribution Policy of Web Server using Subsequent Load and HTTP Connection Time (잠재 부하 정보와 HTTP 연결의 에이징을 통한 HTTP 연결 스케줄링 알고리즘)

  • Kim Si-Yeon;Kim Sungchun
    • Journal of KIISE:Computer Systems and Theory / v.32 no.11_12 / pp.717-721 / 2005
  • With HTTP/1.0, a single request means a single HTTP connection, so the granularity of dispatching is the same as the real load. With persistent HTTP connections, however, multiple requests may arrive on a single TCP connection, so load can no longer be dispatched at the granularity of individual requests, which constrains the feasible dispatching policies. In this paper we propose a new connection dispatching policy for supporting HTTP/1.1 persistent connections in cluster-based Web servers. When a request for a base HTML file arrives, the dispatcher estimates the subsequent load that will arrive on that connection from information about the embedded objects. After storing this load information in a Load Table, the dispatcher applies a connection-aging strategy to live persistent connections as time passes. Simulation results show an improvement of about 1.7%~16.8% in average response time compared to the existing WLC algorithm.
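
The dispatching idea can be sketched as follows (hypothetical server names, embedded-object counts, and aging rate; not the paper's implementation):

```python
import time

# Sketch only: when the base HTML is requested, charge the chosen server with
# the "subsequent load" expected on that persistent connection (number of
# embedded objects), and age that charge as time passes.

EMBEDDED_OBJECTS = {"/index.html": 12, "/news.html": 5}  # hypothetical Load Table input

class Dispatcher:
    def __init__(self, servers, decay_per_sec=0.2):
        self.load = {s: 0.0 for s in servers}        # estimated outstanding load
        self.stamp = {s: time.time() for s in servers}
        self.decay = decay_per_sec                   # connection-aging rate

    def _age(self, server):
        now = time.time()
        elapsed = now - self.stamp[server]
        # Aging: assume part of the charged load has already been served.
        self.load[server] = max(0.0, self.load[server] - self.decay * elapsed)
        self.stamp[server] = now

    def dispatch(self, base_url):
        for s in self.load:
            self._age(s)
        target = min(self.load, key=self.load.get)   # least estimated load wins
        self.load[target] += 1 + EMBEDDED_OBJECTS.get(base_url, 0)
        return target

d = Dispatcher(["web1", "web2", "web3"])
for url in ["/index.html", "/news.html", "/index.html"]:
    print(url, "->", d.dispatch(url))
```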

Application Behavior-oriented Adaptive Remote Access Cache in Ring based NUMA System (링 구조 NUMA 시스템에서 적응형 다중 그레인 원격 캐쉬 설계)

  • 곽종욱;장성태;전주식
    • Journal of KIISE:Computer Systems and Theory / v.30 no.9 / pp.461-476 / 2003
  • Owing to its ease of implementation and its alleviation of the memory bottleneck, the NUMA architecture has dominated multiprocessor systems for the past several years. However, because a NUMA system distributes memory across nodes, frequent remote memory accesses are a key factor in performance degradation. Therefore, efficient design of the RAC (Remote Access Cache) in a NUMA system is critical for performance improvement. In this paper, we propose a multi-grain RAC that adaptively controls the RAC line size according to the behavior of each application. We then simulate a NUMA system with the multi-grain RAC using MINT, an event-driven memory hierarchy simulator, and analyze the performance results. First, using a profile-based determination method, we identify the optimal RAC line size for each application; we then compare and analyze the performance of NUMA systems with a normal RAC, with an optimal-line-size RAC, and with the multi-grain RAC. The simulations show that the worst case is always avoided and that the results are very close to the optimal case for any combination of application and RAC configuration.
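
A rough software analogue of the adaptation idea, choosing among candidate RAC line sizes from observed hit rates, is sketched below (the paper describes a hardware mechanism; the sizes, thresholds, and window are made up):

```python
# Sketch only: observe hit rates over a window of remote references and move
# to a smaller or larger RAC line size accordingly.

CANDIDATE_LINE_SIZES = [64, 128, 256]   # bytes

class MultiGrainRAC:
    def __init__(self, window=1000):
        self.window = window      # references per adaptation interval
        self.current = 128        # current RAC line size
        self.hits = 0
        self.refs = 0

    def record(self, hit):
        self.hits += int(hit)
        self.refs += 1

    def maybe_adapt(self):
        if self.refs < self.window:
            return self.current
        hit_rate = self.hits / self.refs
        idx = CANDIDATE_LINE_SIZES.index(self.current)
        # Poor hit rate suggests little spatial locality: shrink the line.
        if hit_rate < 0.5 and idx > 0:
            self.current = CANDIDATE_LINE_SIZES[idx - 1]
        # Very good hit rate suggests more locality can be exploited: grow it.
        elif hit_rate > 0.9 and idx < len(CANDIDATE_LINE_SIZES) - 1:
            self.current = CANDIDATE_LINE_SIZES[idx + 1]
        self.hits = self.refs = 0                  # start a fresh window
        return self.current

rac = MultiGrainRAC(window=4)
for hit in [True, False, False, False]:   # mostly misses in this window
    rac.record(hit)
print(rac.maybe_adapt())                  # adapts from 128 down to 64
```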

Efficient Skyline Computation on Time-Interval Data Streams (유효시간 데이터 스트림에서의 스카이라인 질의 알고리즘)

  • Park, Nam-Hun;Chang, Joong-Hyuk
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.1 / pp.370-381 / 2012
  • Multi-criteria result extraction is crucial in many scientific applications that support real-time stream processing, such as habitat research and disaster monitoring. Skyline evaluation is computationally intensive, especially over continuous time-interval data streams where each object has its own expiration time. In this work, we propose TI-Sky, a continuous skyline evaluation framework. To ensure correctness, the result space must be continuously maintained as new objects arrive and older objects expire. TI-Sky balances the cost of continuously maintaining the result space against the cost of computing the final skyline result from this space whenever a pull-based user query is received. Our key principle is to incrementally maintain a partially precomputed skyline result space, and to do so efficiently by working at a higher level of abstraction. TI-Sky's algorithms for insertion, deletion, purging, and result retrieval exploit both layers of granularity. Our experimental study demonstrates the superiority of TI-Sky over existing techniques on a wide variety of data sets.
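
A minimal sketch of skyline maintenance over objects with expiration times is shown below (it omits TI-Sky's layered, partially precomputed result space; smaller values are assumed better in every dimension):

```python
import heapq
import itertools

# Sketch only: each object carries its own expiration time; expired objects
# are purged before the skyline is computed for a pull-based query.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

class TimeIntervalSkyline:
    def __init__(self):
        self.live = {}          # id -> (point, expire_time)
        self.expiry = []        # min-heap of (expire_time, id)
        self.ids = itertools.count()

    def insert(self, point, expire_time):
        oid = next(self.ids)
        self.live[oid] = (point, expire_time)
        heapq.heappush(self.expiry, (expire_time, oid))

    def purge(self, now):
        while self.expiry and self.expiry[0][0] <= now:
            _, oid = heapq.heappop(self.expiry)
            self.live.pop(oid, None)

    def skyline(self, now):
        self.purge(now)
        points = [p for p, _ in self.live.values()]
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

sky = TimeIntervalSkyline()
sky.insert((1, 5), expire_time=10)
sky.insert((2, 2), expire_time=4)
sky.insert((4, 4), expire_time=20)
print(sky.skyline(now=3))   # (4,4) is dominated by (2,2)
print(sky.skyline(now=6))   # (2,2) has expired, so (4,4) re-enters the skyline
```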

Development of Signature Generation and Update System for Application-level Traffic Classification (응용 레벨 트래픽 분류를 위한 시그니쳐 생성 및 갱신 시스템 개발)

  • Park, Jun-Sang;Park, Jin-Wan;Yoon, Sung-Ho;Lee, Hyun-Shin;Kim, Myung-Sup
    • The KIPS Transactions:PartC / v.17C no.1 / pp.99-108 / 2010
  • Traffic classification is a preliminary but essential step for stable network service provision and efficient network resource management. While various classification methods have been introduced in the literature, payload signature-based classification is accepted as giving the highest performance in terms of accuracy, completeness, and practicality. However, collecting and maintaining up-to-date signatures is a difficult and time-consuming process, because Internet traffic changes dynamically over time. In this paper, we propose an automatic payload signature generation mechanism that reduces the time for signature generation and increases the granularity of signatures. Furthermore, we describe a signature update system that keeps signatures current over time. Experiments with our campus network traffic demonstrate the feasibility of the proposed mechanism.
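
The basic idea behind payload signature generation, keeping payload substrings that recur across flows of the same application, can be sketched as follows (toy payloads and thresholds; not the paper's exact algorithm):

```python
from collections import Counter

# Sketch only: collect payloads of flows known to belong to one application
# and keep the substrings that appear in most of them as candidate signatures.

def candidate_signatures(payloads, min_len=4, min_support=0.9):
    counts = Counter()
    for payload in payloads:
        seen = set()
        for i in range(len(payload) - min_len + 1):
            seen.add(payload[i:i + min_len])
        counts.update(seen)                      # count per flow, not per occurrence
    threshold = min_support * len(payloads)
    return sorted(s for s, c in counts.items() if c >= threshold)

flows = [b"GET /video HTTP/1.1\r\nUser-Agent: AppX",
         b"GET /chat HTTP/1.1\r\nUser-Agent: AppX",
         b"GET /file HTTP/1.1\r\nUser-Agent: AppX"]
print(candidate_signatures(flows)[:5])          # e.g. b'AppX', b'HTTP', ...
```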

SPMLD: Sub-Packet based Multipath Load Distribution for Real-Time Multimedia Traffic

  • Wu, Jiyan;Yang, Jingqi;Shang, Yanlei;Cheng, Bo;Chen, Junliang
    • Journal of Communications and Networks / v.16 no.5 / pp.548-558 / 2014
  • Load distribution is vital to the performance of multipath transport. The task becomes more challenging in real-time multimedia applications (RTMA), which impose stringent delay requirements. Two key issues must be addressed: 1) how to minimize end-to-end delay and 2) how to alleviate packet reordering, which incurs additional recovery time at the receiver. In this paper, we propose sub-packet based multipath load distribution (SPMLD), a new model that splits traffic at the granularity of sub-packets. SPMLD aims to minimize total packet delay by effectively aggregating multiple parallel paths into a single virtual path. First, we formulate packet splitting over multiple paths as a constrained optimization problem and derive its solution based on a progressive approximation method. Second, within this solution, we analyze queuing delay by introducing a D/M/1 model and obtain an expression for the dynamic packet splitting ratio of each path. Third, to describe SPMLD's scheduling policy, we propose two distributed algorithms implemented in the source and destination nodes, respectively. We evaluate the performance of SPMLD through extensive simulations in QualNet using real-time H.264 video streaming. Experimental results demonstrate that SPMLD outperforms previous flow-based and packet-based load distribution models in terms of video peak signal-to-noise ratio, total packet delay, end-to-end delay, and risk of packet reordering. Moreover, SPMLD's extra overhead is tiny compared to the input video stream.
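
The sub-packet splitting step can be sketched as follows (the splitting ratios here come from a simple inverse-delay heuristic rather than the paper's D/M/1-based derivation; path delays are made up):

```python
# Sketch only: cut one packet into sub-packets sized in proportion to per-path
# splitting ratios, so the parallel paths behave like one virtual path.

PATHS = {                      # path -> estimated one-way delay in ms
    "path1": 20.0,
    "path2": 35.0,
    "path3": 50.0,
}

def splitting_ratios(paths):
    inv = {p: 1.0 / d for p, d in paths.items()}   # faster paths carry more
    total = sum(inv.values())
    return {p: v / total for p, v in inv.items()}

def split_packet(packet: bytes, ratios):
    """Cut one packet into sub-packets sized according to the ratios."""
    chunks, offset = {}, 0
    items = list(ratios.items())
    for i, (path, r) in enumerate(items):
        size = len(packet) - offset if i == len(items) - 1 else int(len(packet) * r)
        chunks[path] = packet[offset:offset + size]
        offset += size
    return chunks

ratios = splitting_ratios(PATHS)
subpackets = split_packet(b"x" * 1500, ratios)
for path, chunk in subpackets.items():
    print(path, round(ratios[path], 2), len(chunk), "bytes")
```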

Dynamic Data Cubes Over Data Streams (데이타 스트림에서 동적 데이타 큐브)

  • Seo, Dae-Hong;Yang, Woo-Sock;Lee, Won-Suk
    • Journal of KIISE:Databases / v.35 no.4 / pp.319-332 / 2008
  • The data cube, a multi-dimensional data model, has been successfully applied in many cases of multi-dimensional data analysis, and its application to data stream analysis is still being researched. A data stream is generated in a real-time, incessant, immense, and volatile manner. Because of these characteristics, the distribution of the data changes rapidly, so the primary rule in handling a data stream is to examine each element once and then discard it. Consequently, users are more interested in attribute values with high support than in the entire set of attribute values observed over the data stream. This paper proposes the dynamic data cube for applying the data cube to the data stream environment. The dynamic data cube specifies the user's area of interest by the support ratio of attribute values and dynamically manages attribute values by grouping them, which reduces memory usage and processing time. It can also efficiently show or emphasize the user's area of interest by increasing the granularity for attribute values with higher support. We perform experiments to verify how efficiently the dynamic data cube works under limited memory.
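
The grouping idea can be sketched as follows (illustrative support threshold and data; the support check here is applied incrementally per arrival, which is a simplification):

```python
from collections import Counter

# Sketch only: attribute values whose support exceeds a threshold keep their
# own cell (finer granularity); the rest are collapsed into a single 'OTHERS'
# group to bound memory usage.

class DynamicDimension:
    def __init__(self, min_support=0.1):
        self.min_support = min_support
        self.counts = Counter()
        self.total = 0

    def observe(self, value):
        self.counts[value] += 1
        self.total += 1

    def cell_key(self, value):
        # High-support values get their own cell; low-support ones are grouped.
        if self.total and self.counts[value] / self.total >= self.min_support:
            return value
        return "OTHERS"

dim = DynamicDimension(min_support=0.2)
stream = ["kr", "kr", "kr", "us", "us", "jp", "cn", "kr", "us", "kr"]
cube = Counter()
for v in stream:
    dim.observe(v)
    cube[dim.cell_key(v)] += 1
print(dict(cube))   # frequent values ('kr', 'us') kept apart from 'OTHERS'
```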

Policy System of Data Access Control for Web Service (웹 서비스를 위한 데이터 접근 제어의 정책 시스템)

  • Jo, Sun-Moon;Chung, Kyung-Yong
    • The Journal of the Korea Contents Association / v.8 no.11 / pp.25-32 / 2008
  • Access control techniques should be flexible enough to support all levels of protection granularity. Since access control policies are very likely to be specified in relation to document types, it is necessary to properly handle documents that are not covered by the existing access control policies. For XML documents, policies need to be described more flexibly than simple authorization allows, and selectable access control methods need to be considered. This paper describes and designs an access control policy system for authorizing XML document access and for managing it efficiently, suggesting a way to exploit the capabilities of XML itself. The system is primarily characterized by its consideration of who may exercise which access privileges on a specific XML document, and by its reconciliation of organization-wide requirements from a policy manager with those of an individual document writer.
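
A minimal sketch of access control at element-path granularity is shown below (hypothetical subjects, paths, and a most-specific-ancestor propagation rule; not the paper's policy system):

```python
# Sketch only: a policy grants or denies a privilege on an XML element path,
# and a decision for a node falls back to the closest ancestor with an
# explicit rule; no rule at all means deny (closed policy).

POLICIES = [
    # (subject, element_path, privilege, decision)
    ("staff",   "/order",         "read", "grant"),
    ("staff",   "/order/payment", "read", "deny"),
    ("manager", "/order",         "read", "grant"),
    ("manager", "/order/payment", "read", "grant"),
]

def check_access(subject, path, privilege):
    """Walk from the requested node up to the root; the most specific rule wins."""
    node = path
    while node:
        for s, p, priv, decision in POLICIES:
            if s == subject and p == node and priv == privilege:
                return decision == "grant"
        node = node.rsplit("/", 1)[0]     # move to the parent element
    return False                          # closed policy: no rule means deny

print(check_access("staff",   "/order/payment/card", "read"))   # False
print(check_access("manager", "/order/payment/card", "read"))   # True
print(check_access("staff",   "/order/items",        "read"))   # True
```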