• Title/Summary/Keyword: caching algorithm

An Out of Core Linear Direct Solution Method for Large Scale Structural Analysis (대규모 구조해석을 위한 보조기억장치 활용 선형 직접해법)

  • Kim, Min-Ki;Kim, Seung Jo
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.42 no.6
    • /
    • pp.445-452
    • /
    • 2014
  • This paper discusses a multifrontal direct solution method with out-of-core storage for large-scale structural analysis under limited computing resources. Large-scale structural analysis requires a huge amount of memory and computation, so an out-of-core solution method is needed when computing resources are limited. This research introduces an out-of-core multifrontal solution algorithm that utilizes a small amount of physical memory and minimizes accesses to slow out-of-core storage. Three ideas are proposed: stack space in the lower triangular part of the square factorization matrix, an inverse stack data structure, and selective data caching and recovery by data block size.
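
The "selective data caching and recovery by data block size" idea can be pictured as a store that keeps factor blocks in RAM only while a memory budget allows and spills the rest to slow storage, recovering them on access. The sketch below is only an illustration of that idea; `OutOfCoreBlockStore`, the byte budget, and the pickle-to-temp-file spilling are assumptions, not the authors' implementation.

```python
import os
import pickle
import tempfile

class OutOfCoreBlockStore:
    """Illustrative block store: small blocks cached in RAM, large blocks spilled."""
    def __init__(self, ram_budget_bytes, spill_dir=None):
        self.ram_budget = ram_budget_bytes      # physical-memory budget (assumption)
        self.used = 0
        self.in_ram = {}                        # block_id -> data kept in memory
        self.spill_dir = spill_dir or tempfile.mkdtemp()

    def put(self, block_id, data):
        size = len(data)
        if self.used + size <= self.ram_budget:
            self.in_ram[block_id] = data        # fits in the budget: cache in RAM
            self.used += size
        else:
            path = os.path.join(self.spill_dir, f"{block_id}.blk")
            with open(path, "wb") as f:         # too large: write to out-of-core storage
                pickle.dump(data, f)

    def get(self, block_id):
        if block_id in self.in_ram:             # fast path: cached block
            return self.in_ram[block_id]
        path = os.path.join(self.spill_dir, f"{block_id}.blk")
        with open(path, "rb") as f:             # slow path: recover block from disk
            return pickle.load(f)

# usage
store = OutOfCoreBlockStore(ram_budget_bytes=1 << 20)
store.put("front_0", b"x" * 1024)          # stays in RAM
store.put("front_1", b"y" * (2 << 20))     # spilled to out-of-core storage
assert store.get("front_1") == b"y" * (2 << 20)
```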

Prefetching Policy based on File Access Pattern and Cache Area (파일 접근 패턴과 캐쉬 영역을 고려한 선반입 기법)

  • Lim, Jae-Deok;Hwang-Bo, Jun-Hyeong;Koh, Kwang-Sik;Seo, Dae-Hwa
    • The KIPS Transactions:PartA
    • /
    • v.8A no.4
    • /
    • pp.447-454
    • /
    • 2001
  • Various caching and prefetching algorithms have been investigated to identify effective methods for improving the performance of I/O devices. A prefetching algorithm decreases the processing time of a system by reducing the number of disk accesses when I/O is needed. This paper proposes an AMBA prefetching method, an extended version of the OBA prefetching method. The AMBA prefetching method prefetches blocks continuously as long as the disk bandwidth is sufficient, so efficient prefetching can be expected even under an excessive data request rate. To prevent cache pollution, the number of data blocks to be prefetched is limited to the cache area. The method can be implemented in a user-level file system based on the Linux operating system. In particular, the proposed prefetching policy improves system performance by about 30∼40% for large files that are accessed sequentially.
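
As a rough illustration of the AMBA idea (continuous prefetching bounded by the cache area), the sketch below prefetches sequential blocks while a bandwidth check succeeds and the number of cached blocks stays under a limit. `CACHE_AREA_BLOCKS`, `bandwidth_free`, and the fake file are illustrative assumptions, not the paper's interface.

```python
from collections import OrderedDict

CACHE_AREA_BLOCKS = 8                      # cache-area limit on prefetched blocks (assumption)

def sequential_prefetch(read_block, start, cache, bandwidth_free):
    """Prefetch consecutive blocks while spare disk bandwidth exists and the
    number of prefetched blocks stays within the cache area."""
    block = start
    while bandwidth_free() and len(cache) < CACHE_AREA_BLOCKS:
        cache[block] = read_block(block)   # bounded by the cache area -> no pollution
        block += 1
    return block                           # first block not yet prefetched

# usage: prefetch from a fake 100-block file while "bandwidth" lasts
blocks = {i: f"data-{i}" for i in range(100)}
cache = OrderedDict()
budget = [5]                               # pretend only 5 spare I/Os remain
def bandwidth_free():
    budget[0] -= 1
    return budget[0] >= 0

next_block = sequential_prefetch(blocks.get, 0, cache, bandwidth_free)
print(sorted(cache))                       # [0, 1, 2, 3, 4]
```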

Distance Browsing Query Processing using Query Result Set (질의 결과를 이용한 거리 브라우징 질의의 처리)

  • Park Dong-Joo;Park Sangwon;Chung Tae-Sun;Lee Sang-Won
    • The KIPS Transactions:PartD
    • /
    • v.12D no.5 s.101
    • /
    • pp.673-682
    • /
    • 2005
  • Distance browsing queries, namely k-nearest neighbor queries, are among the most important queries in spatial database applications such as Geographic Information Systems (GISs). Recently, GIS applications have tended to extend toward wide multi-user environments such as the Web. Since many techniques for such queries, of which Hjaltason and Samet's algorithm is the most efficient, were optimized for a single query, they need to be adapted to multi-user environments. A good approach is to store individual query results in a cache (query result caching) and reuse them when evaluating incoming queries (query result matching). In this paper, we propose a complemented version of Hjaltason and Samet's algorithm capable of reusing previous query results in a cache for answering distance browsing queries in multi-user GIS environments. Our experimental results confirm the efficiency of our approach.
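
A minimal sketch of query result caching and matching for k-NN queries, under the simplifying assumption of exact-match reuse on the query point: a cached answer for k neighbors can serve any later query at the same point asking for k' ≤ k, and a brute-force search stands in for Hjaltason and Samet's incremental algorithm. All names are illustrative, not from the paper.

```python
import math

def knn_brute_force(points, q, k):
    """Fallback k-NN search (stand-in for the incremental distance-browsing algorithm)."""
    return sorted(points, key=lambda p: math.dist(p, q))[:k]

class KnnResultCache:
    """Query-result cache: reuse a stored k-NN answer for a later query
    at the same point that asks for k' <= k neighbors."""
    def __init__(self):
        self.results = {}                      # query point -> ranked neighbor list

    def query(self, points, q, k):
        cached = self.results.get(q)
        if cached is not None and len(cached) >= k:
            return cached[:k]                  # query-result matching: no new search
        answer = knn_brute_force(points, q, k) # cache miss: run the base algorithm
        self.results[q] = answer               # query-result caching
        return answer

# usage
pts = [(0, 0), (1, 1), (2, 2), (5, 5)]
cache = KnnResultCache()
print(cache.query(pts, (0, 0), 3))   # computed and cached
print(cache.query(pts, (0, 0), 2))   # served from the cached result
```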

Load Balancing Method for Query Processing Based on Cache Management in the Grid Database (그리드 데이터베이스에서 질의 처리를 위한 캐쉬 관리 기반의 부하분산 기법)

  • Shin, Soong-Sun;Back, Sung-Ha;Eo, Sang-Hun;Lee, Dong-Wook;Kim, Gyoung-Bae;Chung, Weon-Il;Bae, Hae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.7
    • /
    • pp.914-927
    • /
    • 2008
  • Grid database management systems are used for large-scale data processing, high availability, and data integration in grid computing. They also distribute queries to remote nodes for efficient query processing. However, when query processing is concentrated on an arbitrary node, the workload becomes imbalanced and query processing performance decreases. In this paper, we propose a load balancing method for query processing based on cache management in grid databases. The proposed method manages the cache in each node through a cache manager. The cache manager connects a node to an area group and maintains the cached meta information in the node. A node caches useful meta information, which the cache manager propagates to other nodes, and the workload is distributed by using the cached meta information of each node. Experiments show an obvious improvement over existing methods when the proposed algorithm is adopted.
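
The routing decision described above might look like the following sketch: send a query to a node in the area group whose cache manager already holds meta information for the target data, otherwise to the least-loaded node, which then caches that meta information. The node structure and names are invented for illustration and are not the paper's design.

```python
class Node:
    """Illustrative grid-database node with a per-node meta-information cache."""
    def __init__(self, name):
        self.name = name
        self.load = 0
        self.meta_cache = set()          # cached meta information (e.g. fragment ids)

def route_query(area_group, target):
    """Route a query to a node already caching meta info for `target`,
    falling back to the least-loaded node in the area group."""
    holders = [n for n in area_group if target in n.meta_cache]
    node = min(holders or area_group, key=lambda n: n.load)
    node.meta_cache.add(target)          # cache manager records the meta information
    node.load += 1
    return node

# usage
group = [Node("n1"), Node("n2"), Node("n3")]
print(route_query(group, "orders_fragment_7").name)   # least-loaded node handles it first
print(route_query(group, "orders_fragment_7").name)   # later queries reuse the caching node
```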

Development of GML Map Visualization Service and POI Management Tool using Tagging (GML 지도 가시화 서비스 및 태깅을 이용한 POI 관리 도구 개발)

  • Park, Yong-Jin;Song, Eun-Ha;Jeong, Young-Sik
    • Journal of Internet Computing and Services
    • /
    • v.9 no.3
    • /
    • pp.141-158
    • /
    • 2008
  • In this paper, we developed a GML Map Server that visualizes maps based on GML, the international standard for exchanging maps in a common format and for interoperability of GIS information. The server transmits the GML map efficiently to mobile devices by using dynamic map partitioning and caching. It manages partitions based on the visible area of a mobile device in order to visualize the map on the device in real time, and serializes each partition area for transmission. The received partition area is combined on the mobile device and visualized after being partitioned again into four visible areas based on the device's display. The received map is then managed with a caching algorithm that takes repeated access into account, for efficient use of resources. To prevent transmission delays in areas of the map with a high instance density, an adaptive map partitioning mechanism is proposed to keep the transmission time regular. The GML Map Server can also trace the position of a mobile device in a WIPI environment; a field emulator can create mobile devices, move them, and trace their positions in place of the real world. We also developed POIM (POI Management) for hierarchical management of POI information and for efficient POI search, using individual tagging technology with a visual interface.
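
The client-side caching of received map partitions can be pictured as a small LRU cache keyed by partition coordinates, which is one plausible reading of "a caching algorithm in consideration of repetitiveness". The class, keys, and capacity below are illustrative, not the tool's actual code.

```python
from collections import OrderedDict

class PartitionCache:
    """Minimal LRU-style cache for map partitions received by the mobile client."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tiles = OrderedDict()           # (row, col) -> serialized partition

    def get(self, key):
        if key not in self.tiles:
            return None                      # must be requested from the GML Map Server
        self.tiles.move_to_end(key)          # repeatedly viewed partitions stay cached
        return self.tiles[key]

    def put(self, key, data):
        self.tiles[key] = data
        self.tiles.move_to_end(key)
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)   # evict the least recently viewed partition

# usage: the four visible partitions around the current display
cache = PartitionCache(capacity=16)
for key in [(10, 5), (10, 6), (11, 5), (11, 6)]:
    if cache.get(key) is None:
        cache.put(key, f"<gml partition {key}>")   # fetched once, reused on revisits
```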

Access Frequency Based Selective Buffer Cache Management Strategy For Multimedia News Data (접근 요청 빈도에 기반한 멀티미디어 뉴스 데이터의 선별적 버퍼 캐쉬 관리 전략)

  • Park, Yong-Un;Seo, Won-Il;Jeong, Gi-Dong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.9
    • /
    • pp.2524-2532
    • /
    • 1999
  • In this paper, we present a new buffer pool management scheme designed for video-type news objects, to build a cost-effective News On Demand storage server that can serve user requests beyond the limit of the disk bandwidth. In a News On Demand server, where many user requests for video-type news objects must be serviced within their playback deadlines, the maximum number of concurrent users is limited by the maximum disk bandwidth the server provides. With the proposed buffer cache management scheme, a requested object is checked to see whether it is worth caching, based on its average arrival interval and the current disk traffic density. Only admitted news objects are allowed into the buffer pool, where buffers are allocated per object rather than per block. We evaluated the performance of the proposed caching algorithm through simulation. The results show that by using this caching scheme to support user requests for real-time news data, about 30% more requests are served than when serving requests from disks alone, without additional cost.
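
A hedged sketch of the admission test described above: an object is admitted to the buffer pool only if its average arrival interval is short enough while the disk traffic density is high. The running-average formula and both thresholds are assumptions made for illustration, not values from the paper.

```python
import time

class SelectiveAdmission:
    """Access-frequency-based admission control for whole news objects (sketch)."""
    def __init__(self, max_interval=60.0, busy_threshold=0.8):
        self.max_interval = max_interval       # seconds between requests (assumption)
        self.busy_threshold = busy_threshold   # disk traffic density cut-off (assumption)
        self.last_seen = {}
        self.avg_interval = {}

    def should_cache(self, obj_id, disk_traffic_density, now=None):
        now = time.time() if now is None else now
        prev = self.last_seen.get(obj_id)
        self.last_seen[obj_id] = now
        if prev is not None:
            interval = now - prev
            old = self.avg_interval.get(obj_id, interval)
            self.avg_interval[obj_id] = 0.5 * old + 0.5 * interval   # running average
        avg = self.avg_interval.get(obj_id)
        # admit objects that are requested often enough while the disks are saturated
        return (avg is not None and avg <= self.max_interval
                and disk_traffic_density >= self.busy_threshold)

# usage
adm = SelectiveAdmission()
print(adm.should_cache("news-42", 0.9, now=0.0))    # False: first request, interval unknown
print(adm.should_cache("news-42", 0.9, now=20.0))   # True: popular object, disks busy
```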

Exploitation of Multi-Versions based on Callback Locking in a Client-Server DBMS Environment (클라이언트-서버 DBMS 환경에서 콜백 잠금 기반 다중 버전의 활용)

  • 강흠근;민준기;전석주;정진완
    • Journal of KIISE:Databases
    • /
    • v.31 no.5
    • /
    • pp.457-467
    • /
    • 2004
  • The efficiency of algorithms managing data caches has a major impact on the performance of systems that utilize client-side data caching. In these systems, two versions of data can be maintained without additional space overhead on the server by exploiting the replication of data in the server's buffer and the clients' caches. In this paper, we present a new cache consistency algorithm employing versions: Two Versions-Callback Locking (2V-CBL). Our experimental results indicate that 2V-CBL provides good performance and, in particular, outperforms a leading cache consistency algorithm, Asynchronous Avoidance-based Cache Consistency, when some clients run only read-only transactions.
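
The two-version idea can be pictured as keeping the previous committed copy of a page next to the current one, so read-only transactions read the old copy instead of blocking on, or being called back because of, a concurrent writer. This is only a toy illustration of the versioning principle, not the 2V-CBL protocol itself, which involves callback locking between server and clients.

```python
class TwoVersionPage:
    """Toy page holding two versions: current (for writers) and previous (for readers)."""
    def __init__(self, value):
        self.current = value          # version seen by update transactions
        self.previous = value         # version served to read-only transactions

    def read_only_read(self):
        return self.previous          # never blocked by an in-progress update

    def update(self, new_value):
        self.current = new_value      # writer works on the current version

    def commit(self):
        self.previous = self.current  # new version becomes visible to readers

# usage
page = TwoVersionPage("balance=100")
page.update("balance=80")             # writer in progress
print(page.read_only_read())          # "balance=100" -- reader is not blocked
page.commit()
print(page.read_only_read())          # "balance=80"
```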

Regular File Access of Embedded System Using Flash Memory as a Storage (플래시 메모리를 저장매체로 사용하는 임베디드 시스템에서의 정규파일 접근)

  • 이은주;박현주
    • Journal of Information Technology Applications and Management
    • /
    • v.11 no.1
    • /
    • pp.189-200
    • /
    • 2004
  • Recently, flash memory, which is small and low-powered, has been widely used as the storage of embedded systems, because an embedded system requires portability and fast response. To bridge the access-time gap between storage and RAM, Linux uses disk caching, which copies part of a file on disk into RAM, and embedded systems are no exception. However, the read access time of flash memory is similar to that of RAM, so when a process on an embedded system reads data, accessing cached data in RAM takes about as long as reading the data directly from flash memory. On an embedded system with limited memory, a disk cache therefore wastes time and memory space on its own management and does not reflect the characteristics of flash memory. This paper proposes regular file access that limits the use of the page cache in a flash-memory-based file system and reflects the characteristics of flash memory. The proposed algorithm minimizes power consumption because the number of RAM accesses is reduced, and it does not waste memory space because it accesses flash memory directly. Therefore, a performance improvement is expected in systems applying the proposed algorithm.
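
One way to picture the proposal is a read path that skips the page cache for flash-backed regular files, since flash reads are nearly as fast as RAM and cache bookkeeping only costs memory and power. The sketch below contrasts the cached and direct paths; `read_page`, `flash_read`, and `PAGE_CACHE` are illustrative stand-ins, not Linux kernel APIs.

```python
PAGE_CACHE = {}                                # (path, offset) -> cached page

def read_page(path, offset, flash_read, use_page_cache=False):
    """Read one page; the proposed path goes straight to flash, bypassing the cache."""
    if use_page_cache:                         # conventional disk-style path
        key = (path, offset)
        if key not in PAGE_CACHE:
            PAGE_CACHE[key] = flash_read(path, offset)
        return PAGE_CACHE[key]
    return flash_read(path, offset)            # proposed path: direct flash access

# usage with a fake flash device
flash = {("/news/a.txt", 0): b"hello"}
def flash_read(path, offset):
    return flash[(path, offset)]

print(read_page("/news/a.txt", 0, flash_read))   # b'hello', read directly from flash
print(len(PAGE_CACHE))                           # 0 -- no page-cache copy was made
```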

An Efficient Algorithm for Restriction on Duplication Caching between Buffer and Disk Caches (버퍼와 디스크 캐시 사이의 중복 캐싱을 제한하는 효율적인 알고리즘)

  • Jung, Soo-Mok
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.10 no.1
    • /
    • pp.95-105
    • /
    • 2006
  • A hard disk, which is based on mechanical operation, is much slower than the processor. Processor speed grows rapidly thanks to semiconductor technology, but the growth of disk speed, bound by mechanical operation, does not keep pace. Buffer caches in main memory and disk caches in the disk controller have been used in computer systems to bridge the speed gap between the processor and the I/O subsystem. In this paper, an efficient buffer cache and disk cache management scheme is proposed that restricts disk blocks from being duplicated between the buffer cache and the disk cache. The performance of the proposed algorithm was evaluated by simulation.
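
Restricting duplication between the two cache levels amounts to exclusive caching: a block promoted into the buffer cache is removed from the disk cache, and a block evicted from the buffer cache is demoted into it. The LRU policies and sizes in the sketch below are illustrative choices, not necessarily the paper's exact scheme.

```python
from collections import OrderedDict

class ExclusiveCaches:
    """Sketch of duplication-restricted caching: a block lives either in the
    buffer cache (main memory) or in the disk cache (controller), never both."""
    def __init__(self, buf_size, disk_size):
        self.buffer = OrderedDict()            # LRU buffer cache
        self.disk = OrderedDict()              # LRU disk cache
        self.buf_size, self.disk_size = buf_size, disk_size

    def read(self, block, read_from_disk):
        if block in self.buffer:
            self.buffer.move_to_end(block)
            return self.buffer[block]
        data = self.disk.pop(block, None)      # promote and REMOVE from the disk cache
        if data is None:
            data = read_from_disk(block)
        self.buffer[block] = data
        if len(self.buffer) > self.buf_size:
            old, old_data = self.buffer.popitem(last=False)
            self.disk[old] = old_data          # demote the evicted block to the disk cache
            if len(self.disk) > self.disk_size:
                self.disk.popitem(last=False)
        return data

# usage
caches = ExclusiveCaches(buf_size=2, disk_size=2)
for b in [1, 2, 3, 1]:
    caches.read(b, lambda blk: f"data{blk}")
print(list(caches.buffer), list(caches.disk))   # no block appears in both caches
```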

Development of Directed Diffusion Algorithm with Enhanced Performance (향상된 성능을 갖는 Directed Diffusion 알고리즘의 개발)

  • Kim Si-Hwan;Han Yun-Jong;Kim Sung-Ho
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2005.11a
    • /
    • pp.527-530
    • /
    • 2005
  • In a sensor network, a large number of sensor nodes communicate with a sink node in a data-centric manner, and Directed Diffusion is one of the routing algorithms used for this. Directed Diffusion is a routing protocol based on the sink diffusing interests (queries) for named data; its advantages are that it operates efficiently even with multiple source nodes and multiple sink nodes, and that aggregation and caching can be performed along the routing path of each query. However, it has the drawback that the overhead required to obtain a reinforced gradient path is large. This study therefore introduces a hop count into the interest packet to restrict excessive gradient setup, and presents an improved Directed Diffusion algorithm that increases energy efficiency. The usefulness of the proposed algorithm is also confirmed through simulation.
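
The hop-count idea can be sketched as interest flooding that stops creating gradients once a per-interest hop budget is exhausted, which bounds the gradient state set up per interest. The graph representation and function below are illustrative only, not the authors' implementation.

```python
def diffuse_interest(graph, sink, max_hops):
    """graph: node -> list of neighbour nodes. Returns the gradients, as
    (node, toward_neighbour) pairs, created while flooding one interest."""
    gradients = set()
    frontier = [(sink, max_hops)]
    visited = {sink}
    while frontier:
        node, hops = frontier.pop()
        if hops == 0:
            continue                       # hop count exhausted: stop diffusing
        for nbr in graph[node]:
            gradients.add((nbr, node))     # gradient points back toward the sink
            if nbr not in visited:
                visited.add(nbr)
                frontier.append((nbr, hops - 1))
    return gradients

# usage: a small five-node sensor field
g = {"sink": ["a", "b"], "a": ["sink", "c"], "b": ["sink", "c"],
     "c": ["a", "b", "d"], "d": ["c"]}
print(len(diffuse_interest(g, "sink", max_hops=1)))   # fewer gradients than full flooding
print(len(diffuse_interest(g, "sink", max_hops=3)))
```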
