• Title/Summary/Keyword: 분산캐시 (distributed cache)


Design and Implementation of A Distributed Shared Object Model for the Distributed Real-time Object, TMO (분산 실시간 객체 TMO를 위한 분산 공유 객체 모델의 설계 및 구현)

  • Choi, Young-Hwan;Kim, Jung-Guk;Han, Sueng-Yun
    • Proceedings of the Korean Information Science Society Conference / 2011.06a / pp.502-505 / 2011
  • RT-eCos3.0 is an ultra-lightweight hard real-time embedded operating system developed on top of the open-source eCos 3.0 to support the execution of TMO (Time-triggered Message-triggered Object), a representative distributed real-time object model. For distributed computing, RT-eCos3.0 supports a network-transparent, channel-based publisher/subscriber multicast distributed IPC. In this paper, we design and implement an object-based distributed shared memory system that builds on this existing distributed IPC to provide a more intuitive, synchronized read/write interface. The implemented distributed shared memory is designed so that, by using a cache object at each local node, synchronization is achieved with as little network communication as possible.
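
As a rough illustration of the scheme described in the abstract, the sketch below keeps a cache object per node for a shared value, publishes writes on a publisher/subscriber channel, and serves reads from the local cache so that synchronization needs little network traffic. All class, method, and channel names are hypothetical stand-ins, not the RT-eCos3.0 API.

    # Sketch of an object-based distributed shared memory with per-node caches.
    # Writes are broadcast over a publisher/subscriber channel; reads hit the
    # local cache, so network traffic is only needed when a value changes.

    class PubSubChannel:
        """Hypothetical in-process stand-in for the multicast distributed IPC."""
        def __init__(self):
            self.subscribers = []

        def subscribe(self, callback):
            self.subscribers.append(callback)

        def publish(self, key, value, version):
            for callback in self.subscribers:
                callback(key, value, version)


    class SharedObjectNode:
        """One node's view of the distributed shared object."""
        def __init__(self, name, channel):
            self.name = name
            self.cache = {}          # key -> (value, version)
            self.channel = channel
            channel.subscribe(self._on_update)

        def write(self, key, value):
            # Bump the version and propagate the update to every node.
            _, version = self.cache.get(key, (None, 0))
            self.channel.publish(key, value, version + 1)

        def read(self, key):
            # Served entirely from the local cache object: no network round trip.
            value, _ = self.cache.get(key, (None, 0))
            return value

        def _on_update(self, key, value, version):
            # Keep only the newest version (updates may arrive out of order).
            _, cached_version = self.cache.get(key, (None, 0))
            if version > cached_version:
                self.cache[key] = (value, version)


    if __name__ == "__main__":
        bus = PubSubChannel()
        a, b = SharedObjectNode("A", bus), SharedObjectNode("B", bus)
        a.write("sensor", 42)
        print(b.read("sensor"))   # 42, read locally on node B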

Web-Based Distributed Visualization System for Large Scale Geographic Data (대용량 지형 데이터를 위한 웹 기반 분산 가시화 시스템)

  • Hwang, Gyu-Hyun;Yun, Seong-Min;Park, Sang-Hun
    • Journal of Korea Multimedia Society / v.14 no.6 / pp.835-848 / 2011
  • In this paper, we propose a client-server based distributed/parallel system to effectively visualize huge geographic data. The system consists of a web-based client GUI program and a distributed/parallel server program that runs on multiple PC clusters. To let the client program run on mobile devices as well as PCs, the graphical user interface was designed with JOGL, the Java-based OpenGL graphics library; by sending the server information about the currently available memory space and the maximum display resolution, the client allows the server to minimize the amount of work it has to perform. The PC clusters that play the role of the server read the requested geographic data from distributed disks, re-sample them appropriately, and send the results back to the client. To minimize the latency incurred by repeatedly accessing the distributed geographic data, cache data structures are maintained both at every server node and at the client.
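
A minimal sketch of the kind of cache structure mentioned in the abstract: re-sampled tiles are kept in a small LRU cache keyed by tile id and resolution, the sort of structure that could be maintained at each server node and at the client. The names and the loader callback are assumptions for illustration, not taken from the paper.

    from collections import OrderedDict

    # Hypothetical LRU cache for re-sampled terrain tiles, keyed by
    # (tile_id, resolution). A structure like this could sit at each server
    # node and at the client to avoid re-reading and re-sampling tiles.

    class TileCache:
        def __init__(self, capacity=128):
            self.capacity = capacity
            self.entries = OrderedDict()   # (tile_id, resolution) -> tile data

        def get(self, tile_id, resolution, load_and_resample):
            key = (tile_id, resolution)
            if key in self.entries:
                self.entries.move_to_end(key)     # mark as most recently used
                return self.entries[key]
            tile = load_and_resample(tile_id, resolution)   # expensive path
            self.entries[key] = tile
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict least recently used
            return tile


    if __name__ == "__main__":
        cache = TileCache(capacity=2)
        loader = lambda tid, res: f"tile {tid} resampled to {res} px"
        print(cache.get("N37E127", 256, loader))  # miss: loads and caches
        print(cache.get("N37E127", 256, loader))  # hit: served from cache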

Implementation of a Large-scale Web Query Processing System Using the Multi-level Cache Scheme (계층적 캐시 기법을 이용한 대용량 웹 검색 질의 처리 시스템의 구현)

  • Lim, Sung-Chae
    • Journal of KIISE: Computing Practices and Letters / v.14 no.7 / pp.669-679 / 2008
  • With the increasing demand for information sharing and searching via the web, the web search engine has drawn much attention. Although much research has been done to solve the technical challenges of building a web search engine, the query processing system is rarely dealt with. Since the software architecture and operational schemes of the query processing system are hard to elaborate, we present related techniques implemented on a commercial system. The implemented system is a very large-scale system that can process 5 million user queries per day using index files built on about 65 million web pages. For performance, we implement a multi-level cache scheme that saves previously returned query results, and the multi-level cache is managed in four levels of cache storage. Using the multi-level cache, we improve the system throughput by a factor of 4, thereby reducing around 70% of the server cost.
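
The abstract describes saving previously returned query results in four levels of cache storage. The sketch below shows one plausible form of such a multi-level lookup, probing levels from fastest to slowest and promoting hits upward; the eviction policy, capacities, and names are illustrative assumptions, not the commercial system's design.

    # Hypothetical multi-level cache: levels are probed in order (fastest first),
    # and a hit in a lower level is promoted to the levels above it.

    class CacheLevel:
        def __init__(self, name, capacity):
            self.name, self.capacity, self.store = name, capacity, {}

        def get(self, query):
            return self.store.get(query)

        def put(self, query, result):
            if len(self.store) >= self.capacity:
                self.store.pop(next(iter(self.store)))   # naive FIFO eviction
            self.store[query] = result


    class MultiLevelCache:
        def __init__(self, levels):
            self.levels = levels    # e.g. 4 levels, smallest/fastest first

        def lookup(self, query, run_query):
            for i, level in enumerate(self.levels):
                result = level.get(query)
                if result is not None:
                    for upper in self.levels[:i]:    # promote toward level 1
                        upper.put(query, result)
                    return result
            result = run_query(query)                # full index search
            for level in self.levels:
                level.put(query, result)
            return result


    if __name__ == "__main__":
        cache = MultiLevelCache([CacheLevel(f"L{i}", 10 ** i) for i in range(1, 5)])
        print(cache.lookup("distributed cache", lambda q: ["doc1", "doc2"]))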

An Efficient P2P Service using Distributed Caches in MANETs (모바일 애드-혹 망에서 분산 캐시를 이용한 효율적인 P2P 서비스 방법)

  • Oh, Sun-Jin;Lee, Young-Dae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.3 / pp.165-171 / 2009
  • With the rapid growth of mobile ad-hoc network (MANET) and P2P service technologies, many attempts to integrate MANET with P2P services and to develop such applications have recently been introduced. Implementing a stable P2P service, however, is a very difficult challenge because of the high mobility of users in a MANET. In this paper, we propose an efficient mobile P2P service that shares and manages multimedia data files in a mobile environment and uses distributed caches to store files according to their popularity in order to achieve high performance. The performance of the proposed P2P service is evaluated with an analytic model and compared with that of an existing DHT-based P2P service in a peer-to-peer network.
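
A toy sketch of the popularity-driven idea in the abstract: more popular files are replicated in the caches of more peers, so a request is more likely to be answered by a nearby cache. The replication rule, neighbor model, and names are assumptions for illustration, not the scheme analyzed in the paper.

    import random

    # Hypothetical popularity-based placement: a file is cached at a number of
    # peers proportional to its popularity, so popular files are found nearby.

    def place_files(peers, files, popularity, cache_slots=2):
        caches = {p: set() for p in peers}
        total = sum(popularity.values())
        for name in sorted(files, key=lambda f: -popularity[f]):
            replicas = max(1, round(len(peers) * popularity[name] / total))
            for peer in random.sample(peers, min(replicas, len(peers))):
                if len(caches[peer]) < cache_slots:
                    caches[peer].add(name)
        return caches

    def lookup(caches, origin, neighbors, name):
        # Check the requesting peer, then its one-hop neighbors (MANET routing
        # and DHT fallback are out of scope for this sketch).
        if name in caches[origin]:
            return origin
        for peer in neighbors.get(origin, []):
            if name in caches[peer]:
                return peer
        return None   # would fall back to the original file holder

    if __name__ == "__main__":
        peers = [f"n{i}" for i in range(5)]
        caches = place_files(peers, ["a.mp4", "b.mp4"], {"a.mp4": 9, "b.mp4": 1})
        print(lookup(caches, "n0", {"n0": ["n1", "n2"]}, "a.mp4"))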


A Remote Cache Coherence Protocol for Single Shared Memory in Multiprocessor System (단일 공유 메모리를 가지는 다중 프로세서 시스템의 원격 캐시 일관성 유지 프로토콜)

  • Kim, Seong-Woon;Kim, Bo-Gwan
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.6 / pp.19-28 / 2005
  • The multiprocessor architecture is a good way to improve computer system performance. The CC-NUMA architecture, which provides a single shared address space over physically distributed memories, is widely used in multiprocessor computer systems. A CC-NUMA system has a full-mapped directory for the shared memory and uses a remote cache memory for fast memory access. In this paper, we propose a processing node architecture for a CC-NUMA system and a cache coherency protocol for this physically distributed but logically shared system. We also show an implementation of a system that adopts the proposed cache coherency protocol.
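
A highly simplified sketch of directory-based coherence for remote caches: a full-mapped directory records which nodes hold a copy of each block and invalidates those copies before a write. The states and messages are reduced to a toy invalidate-on-write rule and are assumptions for illustration, not the protocol implemented in the paper.

    # Toy full-mapped directory: for every block it tracks the set of nodes
    # whose remote cache holds a copy, and invalidates sharers before a write.

    class Directory:
        def __init__(self, num_nodes):
            self.sharers = {}            # block -> set of node ids holding a copy
            self.memory = {}             # block -> value (home memory)
            self.caches = [dict() for _ in range(num_nodes)]   # remote caches

        def read(self, node, block):
            if block in self.caches[node]:          # remote cache hit
                return self.caches[node][block]
            value = self.memory.get(block, 0)       # fetch from home memory
            self.caches[node][block] = value
            self.sharers.setdefault(block, set()).add(node)
            return value

        def write(self, node, block, value):
            # Invalidate every other sharer recorded in the directory entry.
            for sharer in self.sharers.get(block, set()) - {node}:
                self.caches[sharer].pop(block, None)
            self.sharers[block] = {node}
            self.caches[node][block] = value
            self.memory[block] = value               # write-through for simplicity


    if __name__ == "__main__":
        d = Directory(num_nodes=2)
        print(d.read(1, "x"))     # node 1 caches block x (value 0)
        d.write(0, "x", 7)        # node 0 writes; node 1's copy is invalidated
        print(d.read(1, "x"))     # 7, re-fetched after invalidation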

Histogram Equalization Technique for Content-Aware Load Balancing in Web Server Clusters (클러스터 Web 서버 상에서 내용 기반 부하 분산을 위한 히스토그램 균등화 기법)

  • 김종근;홍기호;최황규
    • Proceedings of the Korean Information Science Society Conference / 2002.04a / pp.631-633 / 2002
  • This paper proposes a new content-aware load balancing technique for large-scale cluster-based web servers. The proposed technique builds a histogram by applying a hash function to the URL field of the web server log and accumulating, for each hash value, the request frequency and the size of the file to be transferred. By applying a histogram equalization transform function to the cumulative distribution of the resulting histogram, the load that falls on each hash value can be assigned evenly across the server nodes. In simulations carried out to verify the effectiveness of the proposed load balancing technique, the histogram equalization method shows excellent performance owing to the advantages of local cache utilization at the servers and even load distribution.
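
A small sketch of the technique summarized above: the load observed for each URL hash value (request frequency times transferred file size) is accumulated into a histogram, and the normalized cumulative distribution maps hash values onto server nodes so that each node receives a roughly equal share while identical URLs keep hitting the same node's cache. The bucket count, hash function, and names are assumptions for illustration only.

    import hashlib

    # Histogram-equalization style mapping of URL hash values to server nodes.
    # Load per hash bucket = accumulated (request count x transferred file size);
    # the normalized CDF of the histogram spreads that load evenly over nodes.

    NUM_BUCKETS = 64

    def bucket(url):
        return int(hashlib.md5(url.encode()).hexdigest(), 16) % NUM_BUCKETS

    def build_mapping(log_entries, num_nodes):
        # log_entries: iterable of (url, file_size) taken from the web server log.
        histogram = [0.0] * NUM_BUCKETS
        for url, size in log_entries:
            histogram[bucket(url)] += size          # frequency x size accumulates
        total = sum(histogram) or 1.0
        mapping, cumulative = [], 0.0
        for load in histogram:
            cumulative += load
            # Equalized CDF in [0, 1] selects the node for this bucket.
            mapping.append(min(int(cumulative / total * num_nodes), num_nodes - 1))
        return mapping

    def route(url, mapping):
        return mapping[bucket(url)]     # same URL -> same node (cache locality)

    if __name__ == "__main__":
        log = [("/a.html", 10_000)] * 50 + [("/b.jpg", 500_000)] * 5
        mapping = build_mapping(log, num_nodes=4)
        print(route("/a.html", mapping), route("/b.jpg", mapping))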


Content-Aware Load Balancing Technique Based on Histogram Equalization in Web Server Clusters (클러스터 Web 서버 상에서 히스토그램 균등화를 이용한 내용기반 부하분산 기법)

  • Kim, Jong-Geun;Choi, Hwang-Kyu
    • Proceedings of the Korea Information Processing Society Conference / 2002.04a / pp.369-372 / 2002
  • This paper proposes a new content-aware load balancing technique for large-scale cluster-based web servers. The proposed technique builds a histogram by applying a hash function to the URL field of the web server log and accumulating, for each hash value, the request frequency and the size of the file to be transferred. By applying a histogram equalization transform function to the cumulative distribution of the resulting histogram, the load that falls on each hash value can be assigned evenly across the server nodes. In simulations carried out to verify the effectiveness of the proposed load balancing technique, the histogram equalization method shows excellent performance owing to the advantages of local cache utilization at the servers and even load distribution.


Performance Evaluation of Catalog Management Schemes for Distributed Main Memory Databases (분산 주기억장치 데이터베이스에서 카탈로그 관리 기법의 성능평가)

  • Jeong, Han-Ra;Hong, Eui-Kyeong;Kim, Myung
    • Journal of Korea Multimedia Society / v.8 no.4 / pp.439-449 / 2005
  • Distributed main memory database management systems (DMM-DBMSs) store the database in the main memories of the participating sites. They provide high performance through fast access to the local databases and high-speed communication among the sites. Recently, many research results on DMM-DBMSs have been reported. However, to the best of our knowledge, there is no known result on the performance of catalog management schemes for DMM-DBMSs. In this work, we evaluate the performance of partitioned catalog management schemes through experimental analysis. First, we classify the partitioned catalog management schemes into three categories: Partitioned Catalogs Without Caching (PCWC), Partitioned Catalogs With Incremental Caching (PCWIC), and Partitioned Catalogs With Full Caching (PCWFC). Experiments were conducted by varying the number of sites, the number of terminals per site, the buffer size, the write query ratio, and the local query ratio. The experiments show that PCWFC outperforms the other two schemes in all cases, and that the performance of PCWIC gradually improves as time goes by. It should be noted that PCWFC does not guarantee high performance for disk-based distributed DBMSs when the workload of an individual site is high, the catalog write ratio is high, or remote data objects are accessed very frequently. The main reason that PCWFC excels for DMM-DBMSs is that query compilation and remote catalog access can be done at very high speed, even when the catalogs of remote data objects are frequently updated.
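
A minimal sketch of how the three partitioned catalog schemes differ on a lookup of a remotely owned catalog entry: PCWC always goes over the network, PCWIC caches an entry after its first remote lookup, and PCWFC keeps a full local copy. Class and method names are hypothetical, not from the paper.

    # Toy comparison of the three partitioned catalog management schemes.
    # remote_fetch stands for a network round trip to the owning site.

    class CatalogSite:
        def __init__(self, scheme, local_catalog, remote_fetch):
            self.scheme = scheme              # "PCWC", "PCWIC", or "PCWFC"
            self.local = dict(local_catalog)  # catalog entries owned by this site
            self.cache = {}                   # cached remote entries (PCWIC/PCWFC)
            self.remote_fetch = remote_fetch
            self.remote_calls = 0

        def lookup(self, obj):
            if obj in self.local:
                return self.local[obj]
            if self.scheme in ("PCWIC", "PCWFC") and obj in self.cache:
                return self.cache[obj]        # hit on the cached remote entry
            self.remote_calls += 1
            entry = self.remote_fetch(obj)    # PCWC always ends up here
            if self.scheme == "PCWIC":
                self.cache[obj] = entry       # incremental: cache after first use
            return entry

        def preload_all(self, full_catalog):
            if self.scheme == "PCWFC":        # full caching: replicate everything
                self.cache.update(full_catalog)


    if __name__ == "__main__":
        remote = {"T1": "schema of T1", "T2": "schema of T2"}
        for scheme in ("PCWC", "PCWIC", "PCWFC"):
            site = CatalogSite(scheme, {}, lambda o: remote[o])
            site.preload_all(remote)
            for _ in range(3):
                site.lookup("T1")
            print(scheme, "remote calls:", site.remote_calls)   # 3, 1, 0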


Improving Search Performance of Tries Data Structures for Network Filtering by Using Cache (네트워크 필터링에서 캐시를 적용한 트라이 구조의 탐색 성능 개선)

  • Kim, Hoyeon;Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems / v.3 no.6 / pp.179-188 / 2014
  • Due to the tremendous amount of network traffic and its rapid increase, the performance of network equipment is becoming an important issue. Network filtering is one of the primary functions affecting the packet-processing performance of network equipment such as firewalls and load balancers. In this paper, we propose a cache-based trie method that improves the search performance of the existing trie method used for network filtering. When several packets are exchanged at a time between a server and a client, the trie method repeats the same search procedure for network filtering. The proposed method avoids this unnecessary repetition of the search procedure by exploiting a cache, so that the performance of network filtering is improved. We performed network filtering experiments for the existing method and the proposed method. Experimental results show that the proposed method can process up to 790,000 more packets per second than the existing method. When the size of the cache list is 11, the proposed method shows the largest performance improvement (18.08%) relative to the increase in memory usage (7.75%).
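
A small sketch of the idea in the abstract: filtering rules live in a trie, and a small cache in front of it returns the previous answer when the same key is looked up again, skipping the repeated trie walk. The key format, eviction policy, and names are assumptions for illustration; the cache size of 11 follows the figure quoted in the abstract.

    from collections import OrderedDict

    # Trie-based filter with a small front cache: repeated lookups for the same
    # key (e.g. packets of one connection) skip the character-by-character walk.

    class FilterTrie:
        def __init__(self):
            self.root = {}

        def insert(self, key, action):
            node = self.root
            for ch in key:
                node = node.setdefault(ch, {})
            node["$action"] = action

        def search(self, key):
            node = self.root
            for ch in key:
                node = node.get(ch)
                if node is None:
                    return None
            return node.get("$action")


    class CachedFilter:
        def __init__(self, trie, cache_size=11):
            self.trie, self.cache_size = trie, cache_size
            self.cache = OrderedDict()            # key -> action

        def match(self, key):
            if key in self.cache:
                self.cache.move_to_end(key)       # cache hit: no trie walk
                return self.cache[key]
            action = self.trie.search(key)        # cache miss: walk the trie
            self.cache[key] = action
            if len(self.cache) > self.cache_size:
                self.cache.popitem(last=False)
            return action


    if __name__ == "__main__":
        trie = FilterTrie()
        trie.insert("192.168.0.10", "DROP")
        f = CachedFilter(trie)
        print(f.match("192.168.0.10"))   # trie walk, then cached
        print(f.match("192.168.0.10"))   # answered from the cache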

Distributed Cache Framework and its Data Procurement Algorithm on In-Memory Data Grid (메모리기반 데이터 그리드 환경에서 확장성을 고려한 분산 캐시 구조 및 데이터 조달 기법)

  • Kim, Byung-Sang;Youn, Chan-Hyun
    • Proceedings of the Korea Information Processing Society Conference / 2010.11a / pp.1767-1769 / 2010
  • This paper discusses a technique for distributing the data transfer load, which must be considered for scalability when executing data-intensive jobs in large-scale Internet-based distributed environments such as grid and cloud computing. We use multiple in-memory data nodes and reduce the data transfer load through partitioning, and we present a theoretical analysis of a scheme that supplies the optimal amount of data to the multiple data nodes in real time, together with performance verification through simulation.
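
A rough sketch of the partitioning idea described above: a requested dataset is split across the in-memory data nodes in proportion to each node's current capacity, so no single node or link carries the whole transfer. The proportional rule and names are illustrative assumptions, not the procurement algorithm analyzed in the paper.

    # Hypothetical partitioned data procurement: split a request of `total_items`
    # across in-memory data nodes in proportion to each node's spare capacity.

    def plan_procurement(total_items, node_capacity):
        # node_capacity: {node_name: items the node can serve right now}
        capacity_sum = sum(node_capacity.values())
        plan, assigned = {}, 0
        nodes = list(node_capacity)
        for node in nodes[:-1]:
            share = round(total_items * node_capacity[node] / capacity_sum)
            share = min(share, node_capacity[node])
            plan[node] = share
            assigned += share
        plan[nodes[-1]] = min(total_items - assigned, node_capacity[nodes[-1]])
        return plan


    if __name__ == "__main__":
        # Three data nodes with different spare capacities share one request.
        print(plan_procurement(1000, {"node-a": 800, "node-b": 400, "node-c": 400}))
        # {'node-a': 500, 'node-b': 250, 'node-c': 250}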