• Title/Summary/Keyword: user request patterns (사용자 요청 패턴)

Search results: 55

Q+R Tree based Pub-Sub System for Mobile Users (모바일 사용자를 위한 Q+R 트리 기반 퍼브-서브 시스템)

  • Lee, Myung-Guk;Kim, Kyungbaek
    • Smart Media Journal
    • /
    • v.4 no.3
    • /
    • pp.9-15
    • /
    • 2015
  • A pub(lish)/sub(scribe) system is a data forwarding system that forwards, out of all published data, only the data of interest, i.e. data matching the subscriptions registered by end users. Classical pub/sub systems are realized by constructing a network of brokers responsible for storing and forwarding data. With the substantial increase in the population of mobile users, a pub/sub system must handle subscriptions on user locations, which change continuously and frequently. In this paper, a new broker-network-based pub/sub system is proposed that efficiently handles the frequent changes of subscriptions related to user locations. Considering users' movement patterns and geographical properties, the proposed system categorizes the entire data space into Slow Moving Regions and Normal Moving Regions, and manages the brokers responsible for these regions with a Q+R tree in order to handle user requests more efficiently. Extensive simulation shows that the proposed Q+R tree based pub/sub system can reduce the number of brokers needed and the network traffic, while supporting dynamic subscriptions related to user location.
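The region split described above can be illustrated with a minimal sketch, assuming regions are classified by the average speed of users observed in them (the threshold and broker-naming scheme are hypothetical, not from the paper):

```python
# Hypothetical sketch: classify data-space regions into Slow Moving
# Regions (SMR) and Normal Moving Regions (NMR) by observed user
# speed, then assign brokers so that slow regions, whose
# location subscriptions rarely change, can pool onto one broker.

SLOW_SPEED_THRESHOLD = 1.5  # m/s; assumed cutoff, not from the paper

def classify_region(avg_user_speed):
    """Return 'SMR' for slow-moving regions, 'NMR' otherwise."""
    return "SMR" if avg_user_speed < SLOW_SPEED_THRESHOLD else "NMR"

class BrokerDirectory:
    """Maps region ids to responsible brokers."""
    def __init__(self):
        self.brokers = {}

    def assign(self, region_id, avg_user_speed):
        kind = classify_region(avg_user_speed)
        # Slow regions share a single broker to save resources.
        broker = "broker-SMR" if kind == "SMR" else f"broker-{region_id}"
        self.brokers[region_id] = broker
        return broker

directory = BrokerDirectory()
print(directory.assign("park", 0.8))     # slow-moving region
print(directory.assign("highway", 25.0)) # normal-moving region
```

Pooling the slow regions is what lets the scheme reduce the number of brokers: their subscriptions change rarely, so one broker can serve many of them.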

A Study of Web Cashing & World Wide Web (웹 상에서의 캐싱기법에 관한 연구)

  • Na, Jong-Won
    • Annual Conference of KIPS
    • /
    • 2006.11a
    • /
    • pp.191-194
    • /
    • 2006
  • The explosive growth of the World Wide Web, a vast information system for sharing information, has become the biggest cause of traffic on existing networks and of load on servers. Due to the large number of requests, it is difficult to provide smooth service, and satisfactory response times cannot be guaranteed. Web caching has attracted attention as a solution to this bottleneck. In this paper, we analyze the basic technical aspects of Web caching and various caching techniques, and propose a cache system for building a more efficient caching infrastructure. We implement a cache system that exploits user access patterns, preferentially prefetching Web objects that are accessed more than a certain number of times within the same network. We also evaluate it by comparing experimental results against a cache employing a conventional caching algorithm on the same network.


Design of a High Performance Backup Application using Cloud Storage (클라우드 저장소를 사용하는 고성능 백업 애플리케이션 설계)

  • Yang, Shinhyung;Park, Min Gyun;Lee, Jae Yoo;Kim, Soo Dong
    • Annual Conference of KIPS
    • /
    • 2013.11a
    • /
    • pp.1576-1579
    • /
    • 2013
  • Cloud storage services are growing rapidly in usage because they offer users a variety of conveniences through highly reliable servers, anytime and anywhere, without the constraints of specific devices or storage space. At the same time, the increasing frequency of storage requests, the size of stored data, and the complexity of file structures raise performance-degradation issues due to the resulting overhead. In this paper, we propose a design model for improving the performance of a cloud backup application, consisting of a composite-pattern-based backup data management technique and a dynamic resource allocation technique. We also validate the effectiveness of the proposed design model by applying it to a real-world case.
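The composite pattern mentioned above treats backup files and folders uniformly, so a whole backup tree can be queried or scheduled with one call. A minimal sketch, with illustrative class and field names not taken from the paper:

```python
# Hypothetical sketch of composite-pattern backup data management:
# files (leaves) and folders (composites) share one interface, so the
# size of an arbitrarily nested backup set is a single recursive call.

class BackupFile:
    """Leaf node: a single file with a known size."""
    def __init__(self, name, size):
        self.name, self.size = name, size

    def total_size(self):
        return self.size

class BackupFolder:
    """Composite node: holds files and sub-folders uniformly."""
    def __init__(self, name):
        self.name = name
        self.children = []

    def add(self, child):
        self.children.append(child)
        return self  # allow chained construction

    def total_size(self):
        # Aggregate recursively over the whole subtree.
        return sum(c.total_size() for c in self.children)

root = BackupFolder("photos").add(BackupFile("a.jpg", 120)) \
                             .add(BackupFile("b.jpg", 80))
print(root.total_size())  # 200
```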

Social-relation Aware Routing Protocol in Mobile Ad hoc Networks (이동 애드 혹 네트워크를 위한 사회적 관계 인식 라우팅 프로토콜)

  • An, Ji-Sun;Ko, Yang-Woo;Lee, Dong-Man
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.8
    • /
    • pp.798-802
    • /
    • 2008
  • In this paper, we consider mobile ad hoc network routing protocols with respect to content sharing applications. We show that by utilizing social relations among participants, our routing protocol can improve routing performance and caching efficiency. Moreover, in situations where the pattern of users' content consumption can be anticipated, our scheme can make such applications more efficient in terms of access time and network overhead. Using the NS2 simulator, we compare our scheme to DSDV and to a routing protocol using a shortest-path algorithm.

Web Prefetching Scheme for Efficient Internet Bandwidth Usage (효율적인 인터넷 대역폭 사용을 위한 웹 프리페칭 기법)

  • Kim, Suk-Hyang;Hong, Won-Gi
    • Journal of KIISE:Information Networking
    • /
    • v.27 no.3
    • /
    • pp.301-314
    • /
    • 2000
  • As the number of World Wide Web (Web) users grows, Web traffic continues to increase at an exponential rate and is now one of the major components of Internet traffic. High bandwidth usage due to Web traffic is observed during peak periods, while bandwidth sits idle during off-peak periods. One solution for reducing Web traffic and speeding up Web access is Web caching; unfortunately, caching alone has limited ability to reduce network bandwidth usage during peak periods. In this paper, we focus on a prefetching algorithm that reduces peak-period bandwidth by using off-peak bandwidth. We propose a statistical, batch, proxy-side prefetching scheme that improves the cache hit rate while requiring only a small amount of storage. In our scheme, Web objects that were accessed many times in the previous 24 hours but will expire in the next 24 hours are selected and prefetched. We present simulation results based on a Web proxy and show that this prefetching algorithm can reduce peak-time bandwidth by using off-peak bandwidth.
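The selection rule above is concrete enough to sketch: prefetch an object if it was requested often in the last 24 hours and its cached copy expires within the next 24 hours. The record layout and hit threshold below are assumptions, not the paper's values:

```python
# Minimal sketch of the prefetch-candidate selection rule: popular in
# the previous 24 hours AND expiring in the next 24 hours.

DAY = 24 * 3600  # seconds

def prefetch_candidates(objects, now, min_hits=10):
    picked = []
    for url, rec in objects.items():
        # Count requests inside the trailing 24-hour window.
        recent_hits = sum(1 for t in rec["hits"] if now - t <= DAY)
        # Expiring soon means the cached copy dies within a day.
        expires_soon = now <= rec["expires_at"] <= now + DAY
        if recent_hits >= min_hits and expires_soon:
            picked.append(url)
    return picked

now = 1_000_000
objs = {
    "/hot.html":   {"hits": [now - 100] * 12, "expires_at": now + 3600},
    "/cold.html":  {"hits": [now - 100] * 2,  "expires_at": now + 3600},
    "/fresh.html": {"hits": [now - 100] * 12, "expires_at": now + 3 * DAY},
}
print(prefetch_candidates(objs, now))  # ['/hot.html']
```

Batch-running this selection during off-peak hours is what shifts the refresh traffic away from the peak period.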


T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.293-299
    • /
    • 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along within a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects, such as holes and curvatures, and other potential causes of gas explosions. Two major data access patterns are apparent when an analyzer accesses the pipeline signal data. The first is the sequential pattern, where an analyst reads the sensor data only once, in sequential fashion. The second is the repetitive pattern, where an analyzer repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well only for the sequential pattern, not for the more dominant repetitive pattern. This problem becomes very serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devise a fast in-memory cache manager, called T-Cache, by treating pipeline sensor data as multiple time-series and efficiently caching the time-series data in T-Cache. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose the new concept of a signal cache line as the caching unit: a set of time-series signal data for a fixed distance. We also provide the various data structures, including smart cursors, and the algorithms used in T-Cache. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system that does not use any caching, indicating that the caching overhead in T-Cache is negligible.
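The signal-cache-line idea can be sketched simply: readings are cached in fixed-distance chunks, and an LRU policy keeps repeatedly analyzed ranges in memory. The line length, capacity, and fetch function below are illustrative assumptions, not the paper's design:

```python
from collections import OrderedDict

# Simplified stand-in for T-Cache's signal cache line: one line holds
# all signals for a fixed pipeline distance, and lines are evicted LRU
# so the ranges an analyst revisits stay resident in memory.

LINE_METERS = 100  # assumed fixed distance covered by one cache line

class TCache:
    def __init__(self, capacity, fetch):
        self.capacity = capacity      # max lines kept in memory
        self.fetch = fetch            # loads a line from the server
        self.lines = OrderedDict()    # line index -> signal data

    def read(self, position):
        idx = int(position // LINE_METERS)
        if idx in self.lines:
            self.lines.move_to_end(idx)          # mark recently used
        else:
            self.lines[idx] = self.fetch(idx)    # miss: go to server
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)   # evict LRU line
        return self.lines[idx]

cache = TCache(capacity=2, fetch=lambda idx: f"signals[{idx}]")
cache.read(30); cache.read(130); cache.read(30); cache.read(250)
print(list(cache.lines))  # [0, 2]: line 1 evicted, line 0 was reused
```

Under the repetitive pattern, repeated `read` calls on the same range hit memory instead of the server, which is the source of the reported disk I/O and elapsed-time savings.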

Design of Compound Knowledge Repository for Recommendation System (추천시스템을 위한 복합지식저장소 설계)

  • Han, Jung-Soo;Kim, Gui-Jung
    • Journal of Digital Convergence
    • /
    • v.10 no.11
    • /
    • pp.427-432
    • /
    • 2012
  • This article proposes a compound knowledge repository and a description method for developing a compound knowledge process. The data targets stored in the proposed repository include all compound knowledge metadata and digital resources, which can be divided into three factors according to purpose: user roles, functional elements, and service ranges. These three factors are the basic components for describing abstract models of the repository. The metadata of compound knowledge are defined by classification into two elements: a component, which represents the properties of the agents, activity units, or resources that use and create knowledge, and a context, which presents the context in which knowledge objects are embedded. The agent of the compound knowledge process performs classification, registration, and pattern-information management of compound knowledge, and handles data flow and processing between the repository and the user. It consists of the following functions: alerting, to announce data search and extraction; data collection and output, for data exchange in a distributed environment; storage and registration of data; and request and transmission, to retrieve the physical material located by a metadata search. The compound knowledge repository constructed for the recommendation system can enhance learning productivity through real-time visualization of timely knowledge, presenting well-organized, varied content to users in industrial settings where work and learning occur simultaneously.

Failure Restoration of Mobility Databases by Learning and Prediction of User Mobility in Mobile Communication System (이동 통신 시스템에서 사용자 이동성의 학습과 예측에 의한 이동성 데이타베이스의 실패 회복)

  • Gil, Joon-Min;Hwang, Chong-Sun;Jeong, Young-Sik
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.4
    • /
    • pp.412-427
    • /
    • 2002
  • This paper proposes a restoration scheme based on mobility learning and prediction for failures of mobility databases in mobile communication systems. In mobile communication systems, mobility databases must maintain users' current location information to provide fast connections. However, a failure of the mobility databases may cause some location information to be lost; as a result, without an explicit restoration procedure, incoming calls to users may be rejected. Therefore, an explicit restoration scheme for mobility database failures is needed to guarantee continuous service availability. Introducing mobility learning and prediction into the restoration process allows the system to locate users after a database failure. During failure-free operation, users' movement patterns are learned by a Neuro-Fuzzy Inference System (NFIS). After a failure, an inference process of the NFIS is initiated and a user's future location is predicted, which is used to locate lost users. This proposal differs from previous checkpoint-based approaches because it needs neither a backup process nor additional storage space for checkpoint information. In addition, simulations show that our proposal can reduce the cost of restoring the location records of lost users after a failure compared to the checkpointing scheme.
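The paper learns movement with a neuro-fuzzy system; as a much simpler stand-in that illustrates the same restoration idea, the sketch below predicts a lost user's most likely next area from first-order transition counts over the movement log (not the paper's NFIS):

```python
from collections import Counter, defaultdict

# Simplified illustration of learn-then-predict restoration: count
# area-to-area transitions during failure-free operation, then after a
# database failure page the most probable next area first.

class MobilityPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def learn(self, path):
        # Record each observed move from one area to the next.
        for prev, nxt in zip(path, path[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, last_known_area):
        counts = self.transitions[last_known_area]
        return counts.most_common(1)[0][0] if counts else None

p = MobilityPredictor()
p.learn(["home", "office", "gym", "home", "office", "cafe", "office", "gym"])
print(p.predict("office"))  # 'gym' (2 of 3 moves from office)
```

Note that, unlike checkpointing, nothing here is written to stable storage: the transition table is rebuilt from ordinary operation, which mirrors the paper's claim of needing no backup process.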

Energy-Efficient Mobility Management Schemes in HMIPv6 (HMIPv6환경에서 에너지 효율적인 이동성 관리 기법)

  • Yang Sun Ok;Kim SungSuk;Hwang Chong-Sun
    • Journal of KIISE:Information Networking
    • /
    • v.32 no.5
    • /
    • pp.615-624
    • /
    • 2005
  • In Mobile IP, several types of messages (binding update, binding request, and binding acknowledgement) are used to support user mobility. These messages must be exchanged frequently for seamless mobility, but this increases network overhead and wastes the mobile node's battery power. Thus, we need a mechanism by which the server tracks user locations while coping with these problems effectively, which is the main concern of this paper. In HMIPv6, each user records all movement logs locally and periodically builds a profile from them. Using the profile, an estimated resident time can be computed whenever the user enters an area, and that time is set as the lifetime of the binding update message. A more accurate lifetime may be obtained if arrival time as well as average resident time is considered in the profile. Through extensive experiments, we measure the bandwidth used for binding update messages, comparing the proposed schemes with standard HMIPv6. The results show a gain of over 80% when a mobile node stays more than 13 minutes in a subnet. In other words, our schemes improve network usage and the mobile node's energy usage by decreasing the number of messages, while still managing user locations as HMIPv6 does.
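The lifetime estimation above can be sketched minimally, assuming each mobile node keeps (area, enter, leave) tuples in its movement log; the log layout and fallback value are illustrative assumptions:

```python
# Minimal sketch: set the binding-update lifetime for an area to the
# node's average resident time there, so fewer update messages are
# sent in areas where the user tends to stay long.

DEFAULT_LIFETIME = 60  # seconds; assumed fallback for unseen areas

def estimated_lifetime(logs, area):
    """Average past resident time in `area`, from (area, enter, leave) logs."""
    stays = [leave - enter for a, enter, leave in logs if a == area]
    if not stays:
        return DEFAULT_LIFETIME
    return sum(stays) / len(stays)

logs = [
    ("subnet-A", 0, 900),      # stayed 15 minutes
    ("subnet-B", 900, 1020),   # passed through in 2 minutes
    ("subnet-A", 1020, 1920),  # another 15-minute stay
]
print(estimated_lifetime(logs, "subnet-A"))  # 900.0
```

A longer lifetime in sticky subnets like `subnet-A` means the node refreshes its binding less often, which is where the reported bandwidth and energy savings come from.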

High-Volume Data Processing using Complex Event Processing Engine in the Web of Next Generation (차세대 웹 환경에서 Complex Event Processing 엔진을 이용한 대용량데이터 처리)

  • Kang, Man-Mo;Koo, Ra-Rok;Lee, Dong-Hyung
    • Journal of KIISE:Databases
    • /
    • v.37 no.6
    • /
    • pp.300-307
    • /
    • 2010
  • With the growth of the Web, data processing technology is advancing. In the next-generation Web, high-speed, high-volume data processing technologies are being developed for various wired and wireless users, USN, and RFID. In this paper, we propose a high-volume data processing technique using a Complex Event Processing (CEP) engine. CEP is a technology for processing complex events. A CEP engine has the following characteristics: first, it collects high volumes of events (data); second, it analyzes the events; finally, it connects events to new actions. In other words, a CEP engine collects, analyzes, and filters high-volume events, and extracts events by pattern matching between registered events and newly arriving events. The extracted results can then be used as input events for other tasks, to respond in real time to requested events, and to trigger database updates for valid data only.
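The collect/filter/match pipeline described above can be sketched as follows; the event fields, the spike pattern, and the threshold are illustrative assumptions, not part of the paper:

```python
# Illustrative CEP-style pipeline: events stream in, invalid ones are
# filtered out, and a registered pattern (here: a temperature spike)
# triggers an action such as a database write.

def is_valid(event):
    # Filtering step: keep only well-formed sensor events.
    return "sensor" in event and "temp" in event

def spike_pattern(prev, curr):
    # Registered pattern: temperature jumps by more than 10 degrees.
    return prev is not None and curr["temp"] - prev["temp"] > 10

def process(stream, on_match):
    prev = None
    for event in filter(is_valid, stream):
        if spike_pattern(prev, event):
            on_match(event)   # connect the matched event to an action
        prev = event

alerts = []
process(
    [{"sensor": "s1", "temp": 20},
     {"bad": True},                   # filtered out
     {"sensor": "s1", "temp": 35},    # spike: +15 degrees
     {"sensor": "s1", "temp": 36}],   # +1 degree, no match
    alerts.append,
)
print(alerts)  # [{'sensor': 's1', 'temp': 35}]
```

Real CEP engines register many such patterns and evaluate them continuously over the event stream; this sketch shows only the single-pattern skeleton.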