• Title/Summary/Keyword: (Network cache)

Search Results: 270

Adaptive Deadline-aware Scheme (ADAS) for Data Migration between Cloud and Fog Layers

  • Khalid, Adnan;Shahbaz, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1002-1015
    • /
    • 2018
  • The advent of the Internet of Things (IoT) and the evident inadequacy of Cloud networks in managing numerous end nodes have brought about a paradigm shift, giving birth to Fog computing. Fog computing is an extension of Cloud computing that pushes Cloud resources to the edge of the network, closer to the user. Cloud computing has become one of the essential needs of people on the Internet, but with the emerging concept of IoT, traditional Clouds seem inadequate. IoT requires extremely low latency, and the distant Cloud servers unknown to the user appear unsuitable for that purpose. With Fog computing, the Fog devices installed closer to the user provide immediate storage for frequently needed data. This paper discusses data migration between different storage types, especially between Cloud devices, and then presents a mechanism to migrate data between the Cloud and Fog layers. We call this mechanism the Adaptive Deadline-Aware Scheme (ADAS) for data migration between Cloud and Fog. We demonstrate that latency-sensitive "hot" data can be accessed and processed through the proposed ADAS more efficiently than with a traditional Cloud setup.
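
The abstract describes ADAS only at a high level, so the sketch below is merely a rough illustration of the general idea rather than the authors' published algorithm: data items with tight latency budgets are treated as "hot" and replicated from a cloud store to a fog cache. All names (`Item`, `migrate_hot_items`, the deadline threshold) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Item:
    key: str
    deadline_ms: float   # latency budget for serving this item
    access_count: int    # how often clients have requested it

# Hypothetical stores: in a real deployment these would be a cloud object
# store and a storage service on a nearby fog node.
cloud_store: dict[str, bytes] = {}
fog_cache: dict[str, bytes] = {}

def migrate_hot_items(items: list[Item], fog_capacity: int,
                      deadline_threshold_ms: float = 50.0) -> None:
    """Move latency-sensitive ("hot") items from the cloud to the fog cache.

    An item is considered hot when its deadline is tighter than the threshold;
    ties are broken by access frequency. This heuristic is an assumption made
    for illustration, not the published ADAS policy.
    """
    hot = sorted(
        (i for i in items if i.deadline_ms <= deadline_threshold_ms),
        key=lambda i: i.access_count,
        reverse=True,
    )
    for item in hot[:fog_capacity]:
        if item.key in cloud_store:
            fog_cache[item.key] = cloud_store[item.key]  # replicate at the edge
```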

A Study on Caching Methods in Client-Server Systems for Mobile GIS (모바일 GIS를 위한 클라이언트-서버 시스템에서 캐슁기법 연구)

  • 김진덕;김미란;최진오
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.05a
    • /
    • pp.201-204
    • /
    • 2002
  • Although reusing cached data when scrolling the map reduces the amount of data passed between client and server, it requires coordinate conversion, selective deletion of objects, and cache compaction on the client. The conversion is a time-intensive operation because of the limited resources of mobile phones, such as low computing power and small memory. Therefore, for efficient map control in a vector-map service on mobile phones, methods are needed that reduce wireless network bandwidth and also cope with the limited resources of the mobile phone. This paper proposes methods for caching pre-received spatial objects in client-server systems for mobile GIS. We also analyze the respective strengths and drawbacks of reusing cached data and transmitting raw data.

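As a hedged illustration of the trade-off discussed in the abstract above, the sketch below keeps previously received spatial objects in a client-side cache and, on a map scroll, converts their coordinates into the new view instead of re-requesting them. The object layout, the eviction rule, and the conversion formula are assumptions for illustration, not the paper's actual method.

```python
from dataclasses import dataclass

@dataclass
class SpatialObject:
    oid: int
    points: list[tuple[float, float]]   # world coordinates of the geometry

class ClientMapCache:
    """Client-side cache of pre-received spatial objects (illustrative only)."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.objects: dict[int, SpatialObject] = {}

    def put(self, obj: SpatialObject) -> None:
        if len(self.objects) >= self.capacity:
            # Naive compaction: evict an arbitrary object when the cache is full.
            self.objects.pop(next(iter(self.objects)))
        self.objects[obj.oid] = obj

    def to_screen(self, view_origin: tuple[float, float], scale: float):
        """Convert cached world coordinates into the current screen view.

        This conversion is the client-side cost the paper weighs against
        re-downloading raw data from the server.
        """
        ox, oy = view_origin
        for obj in self.objects.values():
            yield obj.oid, [((x - ox) * scale, (y - oy) * scale)
                            for x, y in obj.points]
```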

An Adaptive TTL Allocation Scheme for Live and On-Demand Personal Broadcasting Service (실시간 라이브 및 비실시간 개인방송 서비스를 위한 적응적 TTL 할당기법)

  • Kim, Namtae;You, Dongho;Jang, Jungyup;Seo, Bong-seok;Jeong, Eun-young;Kim, Dong Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.11a
    • /
    • pp.45-46
    • /
    • 2016
  • This paper proposes an adaptive TTL (Time-To-Live) allocation scheme based on the dynamic caching method of a CDN (Content Delivery Network). When a client watching a live personal broadcast goes back to re-watch a specific past scene, the proposed scheme not only reduces the load on the origin server efficiently but also uses the storage space of the cache servers efficiently. The adaptive TTL allocation scheme proposed in this paper is therefore expected to provide better service when personal-broadcast viewers selectively watch past video segments.

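The abstract above does not give the TTL formula, so the following is only a minimal sketch of the general idea: cached segments of a personal broadcast receive a per-item TTL that grows with how often past scenes are re-watched, keeping popular catch-up content at the edge longer. `AdaptiveTTLCache`, the base/extra TTL values, and the popularity rule are assumptions.

```python
import time

class AdaptiveTTLCache:
    """Toy CDN edge cache whose entries get per-item, popularity-driven TTLs."""

    def __init__(self, base_ttl: float = 60.0, extra_ttl: float = 300.0) -> None:
        self.base_ttl = base_ttl     # TTL for rarely re-watched segments
        self.extra_ttl = extra_ttl   # additional TTL granted to popular segments
        self.entries: dict[str, tuple[bytes, float]] = {}  # key -> (data, expiry)
        self.hits: dict[str, int] = {}

    def put(self, key: str, data: bytes) -> None:
        # Segments that are re-watched often stay cached longer, which keeps
        # catch-up requests off the origin server.
        popularity = min(self.hits.get(key, 0), 10) / 10.0
        ttl = self.base_ttl + popularity * self.extra_ttl
        self.entries[key] = (data, time.time() + ttl)

    def get(self, key: str) -> bytes | None:
        entry = self.entries.get(key)
        if entry is None or entry[1] < time.time():
            self.entries.pop(key, None)   # expired: must refetch from the origin
            return None
        self.hits[key] = self.hits.get(key, 0) + 1
        return entry[0]
```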

Design and Implementation of MPOA using SDL (SDL을 이용한 MPOA 설계 및 구현)

  • Lim, Ji-Young;Kim, Hee-Jung;Lim, Soo-Jung;Chae, Ki-Joon;Lee, Mee-Jung;Choi, Kil-Young;Kang, Hun
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.6 no.6
    • /
    • pp.643-656
    • /
    • 2000
  • MPOA, proposed and standardized by the ATM Forum, is a protocol that provides effective bridging and routing for ATM networks in a diverse network environment. Its primary goal is to transfer unicast data effectively among subnets. In this paper, MPOA components are implemented using SDL (Specification and Description Language), which the ITU has standardized for the development of communication systems. In addition, MPOA procedures for various operations, such as address translation for packets from upper layers, Ingress/Egress cache management, and shortcut configuration, are examined with the help of the SDT (SDL Design Tool) simulator.

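As background for the cache-management procedures mentioned above, the sketch below models an MPOA-style ingress cache that maps a destination network address to a shortcut identifier and falls back to the default routed path on a miss. The field names, the TTL handling, and the example values are illustrative assumptions, not the paper's SDL specification.

```python
import time
from typing import Optional

class IngressCache:
    """Maps destination addresses to ATM shortcut identifiers (illustrative)."""

    def __init__(self, ttl: float = 600.0) -> None:
        self.ttl = ttl
        self.entries: dict[str, tuple[str, float]] = {}  # dest -> (shortcut_id, expiry)

    def add_shortcut(self, dest: str, shortcut_id: str) -> None:
        # Installed after a successful MPOA resolution request/reply exchange.
        self.entries[dest] = (shortcut_id, time.time() + self.ttl)

    def resolve(self, dest: str) -> Optional[str]:
        entry = self.entries.get(dest)
        if entry is None or entry[1] < time.time():
            self.entries.pop(dest, None)
            return None          # miss: forward over the default routed path
        return entry[0]          # hit: send the packet over the shortcut VCC

# Example: packets to 10.0.0.0 use the shortcut once it has been resolved.
cache = IngressCache()
cache.add_shortcut("10.0.0.0", "vcc-42")
assert cache.resolve("10.0.0.0") == "vcc-42"
```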

A Deep Learning Approach for Identifying User Interest from Targeted Advertising

  • Kim, Wonkyung;Lee, Kukheon;Lee, Sangjin;Jeong, Doowon
    • Journal of Information Processing Systems
    • /
    • v.18 no.2
    • /
    • pp.245-257
    • /
    • 2022
  • In the Internet of Things (IoT) era, the types of devices used by one user are becoming more diverse and the number of devices is also increasing. However, a forensic investigator cannot exploit or collect all of a user's devices; there are legal issues (e.g., privacy, jurisdiction) and technical issues (e.g., computing resources, the increase in storage capacity). Therefore, in the digital forensics field, it has been a challenge to acquire, by analyzing the seized devices, information that remains on devices that could not be collected. In this study, we focus on the fact that multiple devices share data through account synchronization on online platforms. We propose a novel way of identifying the user's interest by analyzing the remnants of targeted advertising, which is served based on the visited websites or search terms of logged-in users. We introduce a detailed methodology to pick out the targeted advertising from cache data and infer the user's interest using deep learning. In this process, an improved learning model considering the unique characteristics of advertisements is implemented. The experimental result demonstrates that the proposed method can effectively identify the user's interest even when only one device is examined.
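
The paper's actual model and label set are not given in the abstract, so the sketch below only illustrates the overall pattern: ad images recovered from a cache dump are run through an image classifier and the predicted interest categories are tallied. The category list, directory layout, and the small untrained CNN are all hypothetical placeholders, not the authors' "improved learning model".

```python
import glob
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

# Hypothetical interest categories; the paper's own label set is not given here.
CATEGORIES = ["automotive", "fashion", "finance", "travel"]

# A deliberately small, untrained CNN stand-in for the paper's model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, len(CATEGORIES)),
)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_cached_ads(cache_dir: str) -> dict[str, int]:
    """Count predicted interest categories over ad images found in a cache dump."""
    counts = {c: 0 for c in CATEGORIES}
    for path in glob.glob(f"{cache_dir}/*.png"):
        image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            predicted = model(image).argmax(dim=1).item()
        counts[CATEGORIES[predicted]] += 1
    return counts
```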

Distributed In-Memory Caching Method for ML Workload in Kubernetes (쿠버네티스에서 ML 워크로드를 위한 분산 인-메모리 캐싱 방법)

  • Dong-Hyeon Youn;Seokil Song
    • Journal of Platform Technology
    • /
    • v.11 no.4
    • /
    • pp.71-79
    • /
    • 2023
  • In this paper, we analyze the characteristics of machine learning workloads and, based on them, propose a distributed in-memory caching technique to improve their performance. The core of a machine learning workload is model training, which is a computationally intensive task. Running machine learning workloads in a Kubernetes-based cloud environment in which the computing framework and storage are separated allows resources to be allocated effectively, but delays can occur because I/O must be performed over network communication. In this paper, we propose a distributed in-memory caching technique to improve the performance of machine learning workloads executed in such an environment. In particular, we propose a new method of pre-caching the data required by machine learning workloads into the distributed in-memory cache by considering Kubeflow Pipelines, a Kubernetes-based machine learning pipeline management tool.

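The precise pre-caching mechanism is not described in the abstract above, so the sketch below only illustrates the general pattern: an early pipeline step loads the dataset shards that the training step will need into a shared in-memory cache, and the training step reads from that cache with a fallback to storage. Redis is used purely as a stand-in for the paper's distributed in-memory cache; the paths, key names, and TTL are assumptions.

```python
import os
import redis  # stand-in client for a distributed in-memory cache

def precache_shards(data_dir: str, cache_host: str = "cache", ttl_s: int = 3600) -> int:
    """Load dataset shards into the cache before the training step starts.

    Returns the number of shards cached. Intended to run as an early pipeline
    step so the training step reads from memory instead of remote storage.
    """
    client = redis.Redis(host=cache_host, port=6379)
    cached = 0
    for name in sorted(os.listdir(data_dir)):
        with open(os.path.join(data_dir, name), "rb") as f:
            client.set(f"shard:{name}", f.read(), ex=ttl_s)
        cached += 1
    return cached

def read_shard(name: str, data_dir: str, cache_host: str = "cache") -> bytes:
    """Training-side read: hit the in-memory cache first, fall back to storage."""
    client = redis.Redis(host=cache_host, port=6379)
    data = client.get(f"shard:{name}")
    if data is not None:
        return data
    with open(os.path.join(data_dir, name), "rb") as f:
        return f.read()
```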

The Effect of Mesh Interconnection Network on the Performance of Manycore System (다중코어 시스템의 메쉬구조 상호연결망이 성능에 미치는 영향)

  • Kim, Han-Yee;Kim, Young-Hwan;Suh, Taeweon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2011.11a
    • /
    • pp.116-119
    • /
    • 2011
  • A many-core system connects a large number of cores through an interconnection network and provides more parallel computing resources than single-core or multi-core systems. According to Amdahl's law, the parallelized portion of a workload can theoretically be accelerated in proportion to the number of processors, but this speedup is limited by many factors, including transfer delays in the interconnection network. In particular, in most many-core systems that support a cache coherence protocol, the data transfer delay caused by cache misses can strongly affect the performance of parallel execution. Effective parallel programming therefore requires studying the interconnection network based on an understanding of the cache architecture. In this paper, we measure the sensitivity of program performance to data transfer delay in the interconnection network using the TilePro64, a 64-core many-core system with a mesh interconnect. The results show that execution time increases roughly linearly with the hop distance between cores, by 4.27% per hop on average.
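
The reported trend (execution time growing by about 4.27% per hop) can be read as a simple linear model; the snippet below is just that arithmetic, with the baseline runtime chosen arbitrarily for the example.

```python
PER_HOP_SLOWDOWN = 0.0427  # average slowdown per hop reported in the abstract

def predicted_runtime(baseline_s: float, hops: int) -> float:
    """Runtime predicted by the linear per-hop model: T(h) = T0 * (1 + 0.0427 * h)."""
    return baseline_s * (1.0 + PER_HOP_SLOWDOWN * hops)

# Example with an arbitrary 10-second baseline: 5 hops -> ~12.1 seconds.
print(predicted_runtime(10.0, 5))
```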

Server network architectures for VOD service (프록시 서버를 이용한 DAVIC VOD 시스템의 설계)

  • Ahn, Kyung-Ah;Choi, Hoon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.23 no.5
    • /
    • pp.1229-1240
    • /
    • 1998
  • In this paper, we provide a design of a DAVIC VOD service system with proxy servers that cache video streams. Proxy servers are placed between a service provider system and service consumer systems. They provide video services to consumers on behalf of the service provider and therefore reduce the load on the service providers and the network. The operation of a proxy server depends on whether the requested program is in its storage. If it is, the proxy server takes over all control; if the proxy does not have the program, it forwards the service request to a service provider. While the service provider system delivers the program to the consumer, the proxy copies and caches the program. The proxy server performs cache replacement if necessary. We show by simulation that LFU is the most efficient cache replacement algorithm among the typical algorithms such as LRU, LFU, and FIFO.

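LFU, which the simulation above found most effective for the proxy, is a standard replacement policy; the sketch below is a minimal generic LFU cache for reference, not the paper's implementation.

```python
class LFUCache:
    """Minimal LFU cache: on overflow, evict the least-frequently-used program."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.store: dict[str, bytes] = {}
        self.freq: dict[str, int] = {}

    def get(self, key: str):
        if key not in self.store:
            return None                      # miss: fetch from the service provider
        self.freq[key] += 1
        return self.store[key]

    def put(self, key: str, value: bytes) -> None:
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.freq, key=self.freq.get)   # least frequently used
            del self.store[victim]
            del self.freq[victim]
        self.store[key] = value
        self.freq[key] = self.freq.get(key, 0) + 1
```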

An Efficient Cooperative Web Caching Scheme (효율적인 협동적 웹캐슁 기법)

  • Shin, Yong-Hyeon
    • The KIPS Transactions:PartC
    • /
    • v.13C no.6 s.109
    • /
    • pp.785-794
    • /
    • 2006
  • Nowadays, the Internet is used worldwide and network traffic is increasing dramatically. Much of the Internet traffic is due to web applications. I propose a new cooperative web caching scheme, called DCOORD, which tries to minimize the overall cost of Web caching. DCOORD reduces the communication cost by coordinating the objects cached at each cache server. In this paper, I compare the performance of DCOORD with two well-known cooperative Web caching schemes, ICP and CARP, using trace-driven simulation. To reflect the cost factor of network communication, I use the CSR (Cost-Saving Ratio) as the performance metric instead of the traditional hit ratio. The performance evaluations show that DCOORD is more cost-effective than ICP and CARP.
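
The abstract does not define CSR precisely; a common form of such a cost-saving ratio, assumed here for illustration rather than taken from the paper, compares the transfer cost actually incurred with the cost that would have been incurred with no caching at all.

```python
def cost_saving_ratio(cost_without_cache: float, cost_with_cache: float) -> float:
    """CSR as assumed here: fraction of communication cost saved by caching."""
    return (cost_without_cache - cost_with_cache) / cost_without_cache

# Example: caching cuts total transfer cost from 1000 to 350 cost units -> CSR = 0.65
print(cost_saving_ratio(1000.0, 350.0))
```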

P2Prefix : Efficient Broadcasting Streaming Scheme Based on P2P Caching (P2Prefix : P2P 캐싱 기반의 효율적인 브로드캐스트 스트리밍 기법)

  • Lee, Chi-Hun;Choi, Young;Choi, Hwang-Kyu
    • Journal of Internet Computing and Services
    • /
    • v.8 no.2
    • /
    • pp.77-87
    • /
    • 2007
  • A typical VOD service allows a number of remote clients to play back a desired video from a large collection of videos stored in one or more video servers. The main bottleneck for a VOD service is the network bandwidth connecting the VOD server to the client, due to the high bandwidth requirements. Many previous studies have shown that VOD service can be greatly improved through the use of multicast, broadcast, or P2P schemes. Broadcast is one of the most efficient techniques because it can transmit a stream to many users without additional network bandwidth, but it suffers from long service latency. To overcome this drawback, this paper proposes the P2Prefix broadcast scheme, which resolves the service latency problem of broadcasting by using P2P caching while also minimizing the client buffer requirement.

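As a rough illustration of the idea behind P2Prefix (a prefix cached at peers hides the wait until the next broadcast cycle), the sketch below has a client play the first segments of a video from a peer's cache and switch to the broadcast channel afterwards. The segment sizes, intervals, and function names are all assumptions, not the paper's protocol.

```python
def play_with_prefix(prefix_cache: dict[str, list[bytes]],
                     broadcast_segments: list[bytes],
                     video_id: str,
                     seconds_until_broadcast: int,
                     segment_duration_s: int = 1) -> list[bytes]:
    """Assemble playback: peer-cached prefix first, then the broadcast stream.

    The prefix must cover at least the wait until the next broadcast cycle;
    otherwise the client falls back to waiting for the broadcast alone.
    """
    prefix = prefix_cache.get(video_id, [])
    needed = seconds_until_broadcast // segment_duration_s
    if len(prefix) < needed:
        return broadcast_segments            # prefix too short: startup delay remains
    # Play the cached prefix while the broadcast catches up, then continue
    # from the matching position in the broadcast stream.
    return prefix[:needed] + broadcast_segments[needed:]
```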