• Title/Summary/Keyword: File Caching


SPARQL Query Processing in Distributed In-Memory System (분산 메모리 시스템에서의 SPARQL 질의 처리)

  • Jagvaral, Batselem; Lee, Wangon; Kim, Kang-Pil; Park, Young-Tack
    • Journal of KIISE / v.42 no.9 / pp.1109-1116 / 2015
  • In this paper, we propose a query processing approach that uses Spark functional programming and a distributed in-memory system to reduce the computational overhead of SPARQL. In the Semantic Web, RDF ontology data is produced at large scale, and the main challenge is to query and manipulate such large ontologies with high throughput. Most existing studies on SPARQL have focused on deploying the Hadoop MapReduce framework; although these approaches have shown promising results, they achieve low throughput because of the underlying distributed file operations. Therefore, to speed up query processing, we suggest query-processing methods based on memory caching in a distributed in-memory system. Our approach is also integrated with a clause unification method that propagates bindings between clauses by exploiting Spark's join, map, and filter operations along with caching. In our experiments, we achieved a high level of performance relative to other approaches; in particular, our performance was nearly on par with that of Sempala, which has been considered the fastest query processing system.
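
The caching-plus-join style of evaluation described in this abstract can be illustrated with a minimal PySpark sketch, assuming PySpark is installed; the toy triples, column names, and example graph pattern are illustrative assumptions, not the paper's data or code.

```python
# Minimal sketch of SPARQL-style basic-graph-pattern matching with Spark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparql-bgp-sketch").getOrCreate()

# Toy RDF triples (subject, predicate, object).
triples = spark.createDataFrame(
    [("alice", "knows", "bob"),
     ("bob", "knows", "carol"),
     ("alice", "worksAt", "acme")],
    ["s", "p", "o"],
)

# Cache the triple table in distributed memory so repeated clause
# evaluations do not re-read it from disk (the key idea above).
triples.cache()

# BGP: ?x knows ?y . ?y knows ?z  -- each clause becomes a filter,
# and the shared variable ?y becomes the join key ("unification").
c1 = triples.filter(triples.p == "knows").selectExpr("s as x", "o as y")
c2 = triples.filter(triples.p == "knows").selectExpr("s as y", "o as z")
result = c1.join(c2, on="y")

result.show()
spark.stop()
```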

Node ID-based Service Discovery for Mobile Ad Hoc Networks (모바일 애드-혹 네트워크를 위한 노드 ID 기반 서비스 디스커버리 기법)

  • Kang, Eun-Young
    • Journal of the Korea Society of Computer and Information / v.14 no.12 / pp.109-117 / 2009
  • In this paper, we propose an efficient service discovery scheme that combines peer-to-peer caching of advertisements with node ID-based selective forwarding of service requests. P2P caching of advertisements spreads available service information quickly and reduces the average response hop count, since service information is stored in neighbor nodes' caches. In addition, node ID-based service requests minimize network transmission delay and reduce network load, since requests are not broadcast to all neighbor nodes. The proposed scheme requires neither a central lookup server nor a registry and does not rely on flooding, which generates a large number of transmission messages. Simulation results show that, compared to a traditional flooding-based protocol, the proposed scheme improves network load and response time by sending far fewer messages and reducing the average response hop count through adaptive selection of forwarding nodes among the neighbors.
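
A toy sketch of the two ingredients named above (advertisement caching at neighbors and ID-based selective forwarding) is given below; the ID-selection rule, hop limit, and data structures are assumptions, not the paper's exact protocol.

```python
# Toy simulation of P2P advertisement caching plus node ID-based forwarding.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    neighbors: list = field(default_factory=list)
    cache: dict = field(default_factory=dict)   # service name -> provider id

    def advertise(self, service, provider_id):
        # Advertisements are cached by neighbors so later requests can be
        # answered locally, reducing the response hop count.
        for nb in self.neighbors:
            nb.cache[service] = provider_id

    def discover(self, service, hops=0, max_hops=3):
        if service in self.cache:
            return self.cache[service], hops
        if hops >= max_hops:
            return None, hops
        # Forward only to a subset of neighbors chosen by node ID
        # (here: the neighbor with the smallest ID) instead of flooding.
        targets = sorted(self.neighbors, key=lambda n: n.node_id)[:1]
        for nb in targets:
            found, h = nb.discover(service, hops + 1, max_hops)
            if found is not None:
                return found, h
        return None, hops

# Tiny topology: n1 - n2 - n3, where n3 provides "print".
n1, n2, n3 = Node(1), Node(2), Node(3)
n1.neighbors, n2.neighbors, n3.neighbors = [n2], [n1, n3], [n2]
n3.advertise("print", provider_id=3)   # cached at n2
print(n1.discover("print"))            # found after one hop via n2's cache
```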

Web3.0 Video Streaming Platform from the Perspective of Technology, Tokenization & Decentralized Autonomous Organization

  • Song, Minzheong
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.149-160 / 2024
  • To examine Web3.0 video streaming (VS) platforms in terms of decentralized technology, tokenization, and decentralized autonomous organizations (DAO), we look at four platforms: DLive, DTube, Livepeer, and Theta Network (Theta). DLive, which first partnered with Medianova for CDN and with Theta for a peer-to-peer (P2P) network and later migrated to the Tron blockchain (BC), takes no commission from what creators earn, rewards viewers based on measured engagement, and incentivizes participation by allocating 20% of donations and fees to funding development and 5% to BitTorrent Token (BTT) stakeholders (of that 5%, 20% goes to partners and 80% to other BTT stakeholders). DTube, on its own lower-layer BC, Avalon, offers the InterPlanetary File System (IPFS), gives 90% of the created value to creators or curators, and tries to empower its community. Livepeer, on the Ethereum BC, offers a decentralized CDN and P2P network, gives the Livepeer Token (LPT) as an incentive to network participants, and lets delegators stake their LPT on well-performing orchestrators. Theta, on its native BC, pulls streams from peering caching nodes, creates a P2P network, rewards caching and relay node contributors with its utility token, TFUEL, and uses its governance token, THETA, as the staking token. We contribute to the categorization of Web3.0 VS platforms: DLive and DTube reduce the risk of platform censorship, promote diverse content, and allow the community to lead toward more user-friendly environments. Livepeer and Theta, on the other hand, provide new methods to stream content but differ from each other: whereas Livepeer focuses on the transcoding layer, Theta concentrates on both the video application layer and the content delivery layer. This means Theta tries to deliver value to all participants by enhancing network quality, reducing CDN cost, and rewarding users with utility tokens for the storage and bandwidth they provide.

Prefetching Mechanism using the User's File Access Pattern Profile in Mobile Computing Environment (이동 컴퓨팅 환경에서 사용자의 FAP 프로파일을 이용한 선인출 메커니즘)

  • Choi, Chang-Ho; Kim, Myung-Il; Kim, Sung-Jo
    • Journal of KIISE:Information Networking / v.27 no.2 / pp.138-148 / 2000
  • In the mobile computing environment, in order to keep copies of important files available while disconnected, the mobile host (client) must store them in its local cache while the connection is maintained. In this paper, we propose a prefetching mechanism that lets the client save files that may be accessed in the near future. Our mechanism consists of an analyzer, a prefetch-list producer, and a prefetch manager. The analyzer records the user's file access patterns in a FAP (File Access Patterns) profile. Using the profile, the prefetch-list producer creates the prefetch list, and the prefetch manager then requests a file server to return this list. We introduce the parameter TRP (Threshold of Reference Probability) to ensure that only sufficiently related files are prefetched: the prefetch-list producer adds a file to the prefetch list only if its reference probability is greater than the TRP. We also use the parameter TACP (Threshold of Access Counter Probability) to reduce the hoarding size required to store a prefetch list. Finally, we measure metrics such as the cache hit ratio, the number of files referenced by the client after disconnection, and the hoarding size. Simulation results show that the performance of our mechanism is superior to that of an LRU caching mechanism, and that prefetching with the TACP can reduce the hoarding size while maintaining performance similar to prefetching without the TACP.
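
The TRP filtering step described above amounts to keeping only files whose empirical reference probability exceeds a threshold. The short sketch below illustrates that step only; the profile format, the TRP value, and the function name are assumptions, not the paper's implementation.

```python
# Minimal sketch of building a prefetch list from a FAP-style access history.
from collections import Counter

def build_prefetch_list(access_log, trp=0.2):
    """Return files whose reference probability exceeds the TRP."""
    counts = Counter(access_log)
    total = sum(counts.values())
    return [f for f, c in counts.items() if c / total > trp]

# Hypothetical access history recorded by the analyzer.
access_log = ["report.doc", "report.doc", "mail.mbox",
              "report.doc", "todo.txt", "mail.mbox"]

print(build_prefetch_list(access_log, trp=0.2))
# ['report.doc', 'mail.mbox'] -- todo.txt (1/6 ~= 0.17) falls below the TRP
```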


Storage Schemes for XML Query Cache (XML 질의 캐쉬의 저장 기법)

  • Kim, Young-Hyun; Kang, Hyun-Chul
    • Journal of KIISE:Databases / v.33 no.5 / pp.551-562 / 2006
  • XML query caching for XML database-backed Web applications has recently begun to be investigated. Despite its practical significance, the efficiency of the storage schemes for cached query results has not been addressed. In this paper, we deal with storage schemes for an XML query cache. A fundamental issue in designing an efficient storage structure for an XML query cache is the performance tradeoff between the two major types of operations on a cached query result: (1) retrieving the whole result to answer the query and (2) updating just a small portion of it to incrementally refresh it against updates to its source. We propose eight different storage schemes for the XML query cache, categorized into three groups: (1) schemes based on plain text files, (2) schemes based on persistent DOM (PDOM) files, and (3) a scheme employing an RDBMS. We implemented all of them and compared their performance with each other. We also compared our proposal with a storage scheme based on a current state-of-the-art XML storage scheme, showing that ours is more efficient.
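
The tradeoff named in (1) and (2) can be seen concretely in the simplest of the listed scheme groups, a plain-text-file cache: whole-result retrieval is one sequential read, but an incremental refresh forces a parse and full rewrite. The sketch below is illustrative only; the file name, toy document, and update are assumptions, not the paper's implementation.

```python
# Sketch of a cached XML query result stored as a plain text file.
from xml.etree import ElementTree as ET

CACHE_FILE = "query_cache.xml"   # hypothetical cache location

def store_result(xml_text):
    with open(CACHE_FILE, "w", encoding="utf-8") as f:
        f.write(xml_text)

def fetch_whole_result():
    # Operation (1): cheap for plain text -- a single sequential read.
    with open(CACHE_FILE, encoding="utf-8") as f:
        return f.read()

def refresh_one_element(item_id, new_price):
    # Operation (2): incremental refresh; with a plain text file the whole
    # document must be parsed and rewritten, which is the tradeoff the
    # PDOM- and RDBMS-based schemes aim to avoid.
    tree = ET.parse(CACHE_FILE)
    for item in tree.getroot().iter("item"):
        if item.get("id") == item_id:
            item.set("price", str(new_price))
    tree.write(CACHE_FILE, encoding="utf-8")

store_result('<result><item id="1" price="10"/><item id="2" price="20"/></result>')
refresh_one_element("1", 15)
print(fetch_whole_result())
```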

DNS-based Dynamic Load Balancing Method on a Distributed Web-server System (분산 웹 서버 시스템에서의 DNS 기반 동적 부하분산 기법)

  • Moon, Jong-Bae; Kim, Myung-Ho
    • Journal of KIISE:Computer Systems and Theory / v.33 no.3 / pp.193-204 / 2006
  • In most existing distributed Web server systems, incoming requests are distributed to servers via the Domain Name System (DNS). Although such systems are simple to implement, the address caching mechanism easily causes load imbalance among servers, and modifying the DNS is necessary to distribute load according to each server's state. In this paper, we propose a new dynamic load balancing method using dynamic DNS updates and a round-robin mechanism. The proposed method performs effective load balancing without modifying the DNS. In this method, a server can be dynamically added to or removed from the DNS list according to its load; by removing overloaded servers from the DNS list, response times become faster. For dynamic scheduling, we propose a scheduling algorithm that considers CPU, memory, and network usage, and a scheduling policy can be selected based on resource usage. The proposed system can easily be managed with a GUI-based management tool. Experiments show that the modules implemented in this paper impose little overhead on the proposed system, and that both the response time and the file transfer rate of the proposed system are better than those of a pure round-robin DNS.
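
A minimal sketch of the scheduling idea above follows: combine CPU, memory, and network usage into a load score and keep only lightly loaded servers in the DNS list. The weights, threshold, server names, and stats are assumptions; a real deployment would push the resulting list with a dynamic DNS UPDATE rather than print it.

```python
# Toy "remove overloaded servers from the DNS list" scheduler.
def load_score(stats, w_cpu=0.5, w_mem=0.3, w_net=0.2):
    return w_cpu * stats["cpu"] + w_mem * stats["mem"] + w_net * stats["net"]

def build_dns_list(servers, threshold=0.7):
    # Servers whose combined load exceeds the threshold are dropped from
    # the list that the DNS round-robins over.
    return [name for name, stats in servers.items()
            if load_score(stats) <= threshold]

servers = {
    "web1.example.com": {"cpu": 0.35, "mem": 0.40, "net": 0.20},
    "web2.example.com": {"cpu": 0.90, "mem": 0.85, "net": 0.70},  # overloaded
    "web3.example.com": {"cpu": 0.50, "mem": 0.55, "net": 0.30},
}

print(build_dns_list(servers))   # web2 stays out until its load drops
```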

Real-time Image Scanning System for Detecting Tunnel Cracks Using Linescan Cameras

  • Jeong, Dong-Hyun; Kim, Young-Rin; Cho, I-Sac; Kim, Eun-Ju; Lee, Kang-Moon; Jin, Kwang-Won; Song, Chang-Geun
    • Journal of Korea Multimedia Society / v.10 no.6 / pp.726-736 / 2007
  • In this paper, a real-time image scanning system using linescan cameras is designed. The system is specially designed to diagnose and analyze the condition of tunnels, such as crack widths, through the captured images. The system consists of two major parts, the image acquisition system and the image merging system. To save scanned image data to storage media in real time, the image acquisition system is designed with two kinds of control and management modules: the control modules are in charge of controlling the hardware devices, and the management modules handle system resources so that the scanned images are safely saved to magnetic storage devices. The system can be mounted on various kinds of vehicles. After the images are taken, the image merging system generates extended images by combining the saved images. Several tests were conducted in the laboratory as well as in the field. In the laboratory simulation, both systems were tested several times and upgraded. In the field test, the image acquisition system was mounted on a specially designed vehicle and images of the interior surface of a tunnel were captured. The system was successfully tested in a real tunnel with the vehicle traveling at 20 km/h. The captured images of the tunnel condition, including cracks, are vivid enough for an expert to diagnose the state of the tunnel from the images instead of inspecting it on site with the naked eye.
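
Conceptually, the image merging step combines the narrow strips produced by the linescan cameras into one extended image along the direction of travel. The tiny sketch below illustrates only that concatenation idea; the strip sizes and the simple overlap handling are assumptions, not the paper's merging algorithm.

```python
# Toy illustration of merging linescan strips into an extended image.
import numpy as np

def merge_strips(strips, overlap=0):
    """Concatenate scan strips along the travel axis, dropping any overlap."""
    trimmed = [strips[0]] + [s[overlap:] for s in strips[1:]]
    return np.concatenate(trimmed, axis=0)

# Three fake 4-row x 8-column grayscale strips.
strips = [np.full((4, 8), v, dtype=np.uint8) for v in (10, 20, 30)]
merged = merge_strips(strips, overlap=1)
print(merged.shape)   # (10, 8): 4 + (4-1) + (4-1) rows
```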


Design of Web Content Update Algorithm to Reduce Communication Data Consumption using Service Worker and Hash (서비스워커와 해시를 이용한 통신 데이터 소모 감소를 위한 웹 콘텐츠 갱신 알고리즘 설계)

  • Kim, Hyun-gook; Park, Jin-tae; Choi, Moon-Hyuk; Moon, Il-young
    • Journal of Advanced Navigation Technology / v.23 no.2 / pp.158-165 / 2019
  • Existing web pages are downloaded and delivered to the user every time the user requests them. Therefore, if the same page is repeatedly requested, the download of the same resources is repeated, which causes unnecessary data consumption. We focus on reducing the data consumption caused by unnecessary requests between users and servers and on improving content delivery speed. In this paper, we therefore propose a caching system and an algorithm that reduce data consumption while keeping the cache up to date by comparing hash values, using a hash function that can detect changes in the files requested by the user.
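
The hash-comparison idea above can be sketched as follows: the client sends the hash of its cached copy, and the server returns the body only when the hash differs. The actual service worker would run in JavaScript in the browser; this Python version and its function names are illustrative assumptions.

```python
# Sketch of hash-based cache freshness checking.
import hashlib

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def respond(requested_file: bytes, client_hash=None):
    server_hash = content_hash(requested_file)
    if client_hash == server_hash:
        # Cached copy is current: send only a tiny "not modified" reply.
        return {"status": 304, "hash": server_hash, "body": None}
    # Cached copy is stale or missing: send the body and its new hash.
    return {"status": 200, "hash": server_hash, "body": requested_file}

page = b"<html><body>hello</body></html>"
first = respond(page, client_hash=None)             # full download
second = respond(page, client_hash=first["hash"])   # no body re-sent
print(first["status"], second["status"])            # 200 304
```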

Analysis and Advice on Cache Algorithms of SSD FTL (SSD FTL 캐시 알고리즘 분석 및 제언)

  • Hyung Bong, Lee; Tae Yun, Chung
    • KIPS Transactions on Computer and Communication Systems / v.12 no.1 / pp.1-8 / 2023
  • It is impossible to overwrite an already allocated page in an SSD, so whenever a write operation occurs, a page replacement with a clean page is required. To resolve this problem, SSDs have an internal flash translation layer, called the FTL, that maps logical pages managed by the operating system's file system to the currently allocated physical pages. SSD pages discarded due to write operations must be recycled through erasure, but since the number of erase (initialization) cycles is limited, the FTL provides a caching function to reduce the number of writes in addition to its core page mapping function. In this study, we focus on FTL cache methodologies for reducing the number of page writes, analyze the related algorithms, and propose a write-only cache strategy. Experiments with the write-only cache on a simulator showed an improvement of up to 29%.
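
A write-only cache in this sense buffers logical page writes and flushes them to flash only on eviction, so repeated writes to the same logical page cost a single physical write. The sketch below illustrates that effect; the cache size, the FIFO eviction policy, and the workload are assumptions, not the paper's design.

```python
# Toy write-only FTL cache that counts physical page writes.
from collections import OrderedDict

class WriteOnlyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()     # logical page -> pending data
        self.flash_writes = 0          # physical page writes performed

    def write(self, lpn, data):
        if lpn in self.cache:
            self.cache.move_to_end(lpn)      # absorb the rewrite in the cache
        elif len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)   # evict oldest pending page
            self.flash_writes += 1
        self.cache[lpn] = data

    def flush(self):
        self.flash_writes += len(self.cache)
        self.cache.clear()

cache = WriteOnlyCache(capacity=2)
for lpn in [0, 1, 0, 0, 2, 1]:       # hypothetical write-heavy trace
    cache.write(lpn, b"x")
cache.flush()
print(cache.flash_writes)            # 4 physical writes instead of 6
```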

Boosting WiscKey Key-Value Store Using NVDIMM-N (NVDIMM-N을 활용한 WiscKey 키-밸류 스토어 성능 향상)

  • Il Han Song; Bo hyun Lee; Sang Won Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.3 / pp.111-116 / 2023
  • The WiscKey database, which reduces the compaction overhead of LSM-tree-based key-value databases, stores values in a separate file and keeps only keys and value addresses in the database. An fsync system call is issued each time a value is stored in order to ensure data integrity. A previously conducted study reported that running the workload without calling fsync improved performance by up to 5.8 times; however, it is difficult to ensure the data integrity of the database without the fsync system call. In this paper, to reduce the overhead of the fsync system call while running workloads on the WiscKey database, we use an NVDIMM caching technique that ensures data integrity while improving the performance of the WiscKey database.
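
Conceptually, the NVDIMM caching idea replaces per-write fsync with appends to a persistent write buffer. The sketch below uses a memory-mapped file as a stand-in for an NVDIMM-N region; the file name, buffer size, record format, and function names are assumptions, not WiscKey's code.

```python
# Conceptual sketch: append values to a persistent staging buffer, no fsync.
import mmap, os, struct

BUF_FILE = "vlog_nvdimm.buf"   # stands in for the NVDIMM-backed region
BUF_SIZE = 1 << 20             # 1 MiB staging buffer

# Create and map the staging file once.
with open(BUF_FILE, "wb") as f:
    f.truncate(BUF_SIZE)
fd = os.open(BUF_FILE, os.O_RDWR)
buf = mmap.mmap(fd, BUF_SIZE)
offset = 0

def put_value(value: bytes) -> int:
    """Append a length-prefixed value to the persistent buffer."""
    global offset
    record = struct.pack("<I", len(value)) + value
    buf[offset:offset + len(record)] = record   # a store to real NVDIMM would
    addr = offset                               # be durable without fsync
    offset += len(record)
    return addr                                 # address kept in the LSM tree

addr = put_value(b"hello world")
print(addr, bytes(buf[4:15]))   # 0 b'hello world'
buf.close(); os.close(fd)
```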