• Title/Summary/Keyword: information storage


Proxy Caching Scheme Based on the User Access Pattern Analysis for Series Video Data (시리즈 비디오 데이터의 접근 패턴에 기반한 프록시 캐슁 기법)

  • Hong, Hyeon-Ok;Park, Seong-Ho;Chung, Ki-Dong
    • Journal of Korea Multimedia Society / v.7 no.8 / pp.1066-1077 / 2004
  • A dramatic increase in the number of Internet users has raised demand for high-quality continuous media services on the Web. While there are plenty of reasons to create rich media content, delivering such high-bandwidth content over the Internet causes problems such as server overload, network congestion, and client-perceived latency. To address these problems, we present two proxy caching schemes (PPC, PPCwP) that consider the characteristics of continuous media objects and user access patterns. The PPC scheme periodically calculates the popularity of objects based on playback quantity and determines the optimal size of the initial fraction of a continuous media object to cache in proportion to the calculated popularity. The PPCwP scheme calculates expected popularity using series information and prefetches the expected initial fraction of newly created continuous media objects. Under PPCwP, the initial client-perceived latency and the amount of data transferred from a remote server can be reduced, and limited cache storage can be utilized efficiently. Trace-driven simulations were performed to evaluate the presented caching schemes using the log files of iMBC. These simulations show that PPC and PPCwP outperform LRU and LFU in terms of BHR and DSR.
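The popularity-proportional prefix idea is easy to illustrate. Below is a minimal Python sketch, assuming a simple relative-popularity formula; the function and variable names are illustrative, not the paper's actual definitions.

```python
# Hypothetical sketch of PPC-style prefix caching: the cached prefix of each
# video grows in proportion to its playback-based popularity. The popularity
# formula below is an assumption, not the paper's exact definition.

def update_cached_prefixes(playback_seconds, object_sizes, cache_capacity):
    """playback_seconds: {video_id: total seconds played this period}
    object_sizes:        {video_id: full object size in bytes}
    Returns {video_id: bytes of the initial fraction to keep cached}."""
    total_play = sum(playback_seconds.values()) or 1
    prefixes = {}
    for vid, played in playback_seconds.items():
        popularity = played / total_play          # relative popularity
        prefixes[vid] = min(object_sizes[vid],    # never cache more than the object
                            int(popularity * cache_capacity))
    return prefixes

# Example: a 10 GB cache shared by three episodes of a series.
sizes = {"ep1": 4_000_000_000, "ep2": 4_000_000_000, "ep3": 4_000_000_000}
plays = {"ep1": 500, "ep2": 300, "ep3": 200}
print(update_cached_prefixes(plays, sizes, 10_000_000_000))
```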


Development of a Metamodel-Based Healthcare Service System using OSGi Component Platform (OSGi 컴포넌트 플랫폼을 이용한 메타모델 기반의 건강관리 서비스 시스템 개발)

  • Kim, Tae-Woong;Kim, Hee-Cheol
    • Journal of Korea Multimedia Society / v.14 no.1 / pp.121-132 / 2011
  • A healthcare system is a type of medical information system that supports early detection and prevention of disease by checking a person's health condition periodically. Such a system is based on vital signs obtained from the body. However, existing systems differ in how they store and describe vital signs, depending on the medical devices used and on each system's evaluation method. This brings disadvantages such as a lack of interoperability between systems, increased development cost, and the absence of a unified system. Thus, this study develops a healthcare system based on a meta model. To this end, the study describes and stores vital-sign data based on the standard HL7 meta model and applies OCL, a formal specification language, to define wellness indexes and extract data for evaluating health risk appraisals. In addition, the study implements components based on OSGi and assembles them so that various devices and systems can be extended easily. Describing vital data based on the meta model makes it possible to ensure interoperability between systems and to standardize the evaluation of health conditions through wellness indexes defined in OCL. It also provides clear specifications.
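To make the OCL-based wellness index concrete, here is a small Python sketch that re-expresses an OCL-style range invariant as a predicate over HL7-like observations; the field names, thresholds, and scoring are assumptions, not the paper's actual meta model.

```python
# Illustrative only: an OCL-style wellness rule re-expressed in Python.
# Observation fields and normal ranges are assumed for the example.
from dataclasses import dataclass

@dataclass
class Observation:
    code: str        # e.g. "heart_rate", "systolic_bp"
    value: float

def wellness_index(observations):
    """Fraction of vital signs inside an assumed normal range, mimicking an
    OCL invariant such as: context Patient inv: self.obs->forAll(o | o.inRange())"""
    ranges = {"heart_rate": (60, 100), "systolic_bp": (90, 120)}
    ok = sum(1 for o in observations
             if o.code in ranges and ranges[o.code][0] <= o.value <= ranges[o.code][1])
    return ok / max(1, len(observations))

print(wellness_index([Observation("heart_rate", 72),
                      Observation("systolic_bp", 135)]))   # 0.5
```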

Efficient and Privacy-Preserving Near-Duplicate Detection in Cloud Computing (클라우드 환경에서 검색 효율성 개선과 프라이버시를 보장하는 유사 중복 검출 기법)

  • Hahn, Changhee;Shin, Hyung June;Hur, Junbeom
    • Journal of KIISE / v.44 no.10 / pp.1112-1123 / 2017
  • As content providers offload more content-centric services to the cloud, data retrieval over the cloud typically returns many redundant items, because near-duplicate content is prevalent on the Internet. Simply fetching all data from the cloud severely degrades efficiency in terms of resource utilization and bandwidth, and data may be encrypted by multiple content providers under different keys to preserve privacy. Thus, locating near-duplicate data in a privacy-preserving way depends on the ability to deduplicate redundant search results and return the best matches without decrypting data. To this end, we propose an efficient near-duplicate detection scheme for encrypted data in the cloud. Our scheme has the following benefits. First, a single query is enough to locate near-duplicate data even if they are encrypted under different keys by multiple content providers. Second, storage, computation, and communication costs are reduced compared to existing schemes, while the same level of search accuracy is achieved. Third, scalability is significantly improved by a novel and efficient two-round detection that locates near-duplicate candidates over large quantities of data in the cloud. An experimental analysis with real-world data demonstrates the applicability of the proposed scheme to a practical cloud system. Finally, the proposed scheme is on average 70.6% faster than an existing scheme.
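The two-round structure can be sketched over plaintext fingerprints (the paper's cryptographic layer, which operates on encrypted data, is omitted here); the MinHash-style bucketing and all names are illustrative.

```python
# Round 1 buckets items by a cheap fingerprint to find candidates;
# round 2 verifies candidates with exact Jaccard similarity.
from collections import defaultdict
import hashlib

def h(s):
    # Stable hash so bucketing is deterministic across runs.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def shingles(text, k=3):
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def two_round_near_duplicates(docs, threshold=0.7):
    # Round 1: coarse candidate generation via a one-band MinHash signature.
    buckets = defaultdict(list)
    for doc_id, text in docs.items():
        buckets[min(h(s) for s in shingles(text))].append(doc_id)
    # Round 2: exact Jaccard verification inside each candidate bucket only.
    pairs = []
    for ids in buckets.values():
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                a, b = shingles(docs[ids[i]]), shingles(docs[ids[j]])
                if len(a & b) / len(a | b) >= threshold:
                    pairs.append((ids[i], ids[j]))
    return pairs

docs = {"d1": "cloud data dedup", "d2": "cloud data dedupe", "d3": "xml views"}
print(two_round_near_duplicates(docs))  # candidates that survive verification
```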

Fresh Kills Park Design, Staten Island, New York (프레쉬 킬스 공원 조경설계)

  • Jeong, Wook-Ju;Corner, James
    • Journal of the Korean Institute of Landscape Architecture / v.33 no.1 s.108 / pp.93-108 / 2005
  • Fresh Kills, located on the west side of Staten Island, New York, is the largest landfill in the world. It served as the storage ground for New York City's trash for more than 50 years. After years of civic and political pressure, state and local legislators decided to close the Fresh Kills landfill in March 2001. Soon afterward, the Department of City Planning announced an international design competition, 'Landfill to Landscape'. The winning entry was to serve as the outline for the redevelopment of the 2,200-acre site, nearly three times the size of Central Park. Forty-eight teams representing more than 200 offices from around the world submitted proposals, from which six finalists, mostly led by landscape architects, were selected. In December 2001, a jury of architects, landscape architects, and city officials unanimously selected Field Operations as the winner. The winning plan, named Lifescape, visualizes a gradual 20-year transformation of the whole of Staten Island into a 'natural lifestyle island', recognizing that Staten Island is home to coastal wetlands that shelter one of the most diverse ecosystems in the New York metropolitan area. It suggests that an ecologically reconstituted Fresh Kills could become the center of an integrated parks-and-greenways system on an otherwise fragmented island. The project will be one of the largest and most ambitious undertakings in the metropolis in years, developing a complex web of habitats and parklands on top of a mountain of trash. This study pursues two goals. One is to provide a general explanation of Lifescape, covering its background, geographical context, design concepts, and phased development plan. The other is to introduce the unique and innovative design approaches of Field Operations, which differ from conventional landscape architectural practice. Since the project has been widely covered in magazines and newspapers, the main focus is on the aspects that differentiate it from usual landscape projects. Conceptually, Lifescape advanced provocative notions about the nature/culture relationship and the role of the urban park as an active agency rather than just a green rest area. The project also introduced pioneering graphics such as plan collages, diagrammatic plans, phasing diagrams, and photomontages as vehicles conveying information, imagination, and provocation. Given the project's growing influence in both academia and practice in the United States, this study is intended as a constructive reference for landscape projects dealing with large and complex urban contexts in conjunction with the restructuring of the contemporary city.

Retrieval Scheme of XML Documents Using Link Queries (링크 질의를 통한 XML 문서의 검색 기법)

  • Mun, Chan-Ho;Gang, Hyeon-Cheol
    • The KIPS Transactions:PartD / v.8D no.4 / pp.313-326 / 2001
  • XML, proposed as a next-generation standard for describing Web documents, is widely used in various Web-based applications, and XML documents on the Web link to one another through hyperlinks. Current work on XML focuses on storage systems that can efficiently store, manage, and retrieve XML documents; however, little research has been conducted on query languages that support XML links or on retrieval systems that process them. In this paper, we propose an extension of an XML query language for expressing XML link queries, together with a processing scheme for it. A link query retrieves content both from an XML document (the query document) and from the XML documents (the referenced documents) referred to by the links in the query document. As far as retrieving from the referenced documents is concerned, the current practice is to manually generate queries to obtain partial results and to repeat this procedure. The link query processing proposed in this paper eliminates this manual work altogether while producing the complete query result. Performance analysis shows that our link query processing strategy outperforms the conventional approach involving manual tasks; the more links there are to the referenced documents, and the more referenced documents reside in the site storing the query document, the greater the reduction in query processing time.
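A rough Python sketch of the link-query idea, using inline XML strings to stand in for documents on the Web; the 'href' link attribute and the single-hop link following are assumptions for illustration, not the paper's query language.

```python
# Evaluate a query on the query document, follow its link references,
# run the same query on each referenced document, and merge the results.
import xml.etree.ElementTree as ET

DOCS = {  # hrefs mapped to inline XML, standing in for remote documents
    "a.xml": "<doc><item>A1</item><ref href='b.xml'/></doc>",
    "b.xml": "<doc><item>B1</item><item>B2</item></doc>",
}

def evaluate(root, xpath):
    return [el.text for el in root.iterfind(xpath)]

def link_query(entry, xpath):
    root = ET.fromstring(DOCS[entry])
    results = evaluate(root, xpath)                 # partial result: query document
    for link in root.iterfind(".//*[@href]"):       # follow each link reference
        target = link.get("href")
        if target in DOCS:                          # referenced document
            results += evaluate(ET.fromstring(DOCS[target]), xpath)
    return results                                  # integrated final result

print(link_query("a.xml", ".//item"))               # ['A1', 'B1', 'B2']
```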


SSQUSAR : A Large-Scale Qualitative Spatial Reasoner Using Apache Spark SQL (SSQUSAR : Apache Spark SQL을 이용한 대용량 정성 공간 추론기)

  • Kim, Jonghoon;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.6 no.2 / pp.103-116 / 2017
  • In this paper, we present the design and implementation of a large-scale qualitative spatial reasoner that can efficiently derive new qualitative spatial knowledge representing both topological and directional relationships between two arbitrary spatial objects, using Apache Spark SQL. Apache Spark SQL is a distributed parallel programming environment that provides both efficient join operations and query processing over a variety of data in Hadoop cluster systems. In our spatial reasoner, the overall reasoning process is divided into six jobs: knowledge encoding, inverse reasoning, equal reasoning, transitive reasoning, relation refining, and knowledge decoding; the execution order of these jobs is determined in consideration of both their logical causal relationships and computational efficiency. The knowledge encoding job reduces the size of the knowledge base to reason over by transforming the input knowledge from XML/RDF form into a more concise form. Repetition of the transitive reasoning job and the relation refining job usually consumes most of the computation time and storage of the overall reasoning process. To improve these jobs, our reasoner determines the minimal disjunctive relations for qualitative spatial reasoning and, based upon them, both reduces the composition table used for the transitive reasoning job and optimizes the relation refining job. In experiments using a large-scale benchmark spatial knowledge base, the proposed reasoner showed high performance and scalability.
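A minimal PySpark sketch of the transitive-reasoning job: new facts are derived by self-joining the triple table and unioned in until a fixpoint is reached. The single composition rule below (treating the relation as transitive) stands in for the reasoner's real composition table.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("qsr-sketch").getOrCreate()
triples = spark.createDataFrame(
    [("a", "north_of", "b"), ("b", "north_of", "c")], ["s", "p", "o"])

changed = True
while changed:  # repeat the join-based rule until nothing new is derived
    derived = (triples.alias("l")
               .join(triples.alias("r"),
                     (col("l.o") == col("r.s")) & (col("l.p") == col("r.p")))
               .select(col("l.s").alias("s"), col("l.p").alias("p"),
                       col("r.o").alias("o")))
    new = derived.subtract(triples)
    changed = new.count() > 0
    triples = triples.union(new).distinct()

triples.show()  # now also contains the inferred ("a", "north_of", "c")
```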

Performance Enhancement Method Through Science DMZ Data Transfer Node Tuning Parameters (Science DMZ 데이터 전송 노드 튜닝 요소를 통한 성능 향상 방안)

  • Park, Jong Seon;Park, Jin Hyung;Kim, Seung Hae;Noh, Min Ki
    • KIPS Transactions on Computer and Communication Systems / v.7 no.2 / pp.33-40 / 2018
  • In an environment with large network bandwidth, maximizing bandwidth utilization is an important issue for increasing transfer efficiency. End-to-end transfer efficiency is significantly influenced by factors such as the network, the data transfer nodes, and intranet network security policies. Science DMZ is a network architecture that maximizes transfer performance by addressing these complex components together. Among them, the data transfer node is a key factor whose storage, network interface, operating system, and transfer application greatly affect transfer performance. The parameters of a data transfer node must therefore be tuned to provide high transfer efficiency. In this paper, we propose a method to enhance performance by tuning the parameters of a 100 Gbps data transfer node. The experimental results confirm that transfer efficiency can be greatly improved in a 100 Gbps network environment by tuning jumbo frames and the CPU governor: network throughput measured with iperf improved by 300% over the default configuration, and an NVMe SSD showed a 140% performance improvement over a hard disk.
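As a rough illustration of this kind of tuning, the Python sketch below wraps standard Linux commands; the interface name, MTU, governor, and buffer sizes are assumptions, not the paper's measured configuration.

```python
# Illustrative DTN tuning steps on a Linux host, run via subprocess.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def tune_dtn(iface="eth0"):
    run(["ip", "link", "set", iface, "mtu", "9000"])          # jumbo frames
    run(["cpupower", "frequency-set", "-g", "performance"])   # CPU governor
    # Larger TCP buffers are also commonly tuned on high-bandwidth DTNs:
    run(["sysctl", "-w", "net.core.rmem_max=268435456"])
    run(["sysctl", "-w", "net.core.wmem_max=268435456"])

if __name__ == "__main__":
    tune_dtn()   # requires root privileges on a Linux host
```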

Processing XML Queries Using XML Materialized Views : Decomposition of a Path Expression and Result Integration (XML 실체뷰를 이용한 XML 질의 처리 : 경로 표현식의 분할 처리 및 결과 통합)

  • Moon, Chan-Ho;Kang, Hyun-Chul
    • The KIPS Transactions:PartD / v.10D no.4 / pp.621-638 / 2003
  • As demand for XML documents on the Web increases, so do Web service applications that manage XML documents as their resources. A view mechanism for XML data can enable effective query processing in these applications: if XML query results are maintained as XML materialized views and a relevant XML query is processed using them, query response time can be reduced. There are two ways to process a path expression, one of the core features of XML query languages, using XML materialized views. In the first, the complete query result is obtained from the materialized view; in the second, part of the result is obtained from the materialized view and the rest from the underlying XML documents. In this paper, we investigate the second type, where a query is an XML path expression. We first describe the storage structures of the XML materialized views derived from the underlying XML documents in the XML repository. We then propose algorithms to decompose a given XML query into a subquery against the materialized view and a subquery against the underlying documents, and to integrate the results of these subqueries. Through performance evaluation, we identify the conditions under which our view-based query decomposition is more effective than conventional processing.
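The decomposition step can be sketched as splitting a path expression against a view definition; the matching rule below (the view's steps must be a prefix of the query's) is a simplification for illustration, not the paper's algorithm.

```python
# Split a path query into the prefix answerable by a materialized view
# and the remainder to evaluate against the underlying documents.

def decompose(query_steps, view_steps):
    k = 0
    while (k < min(len(query_steps), len(view_steps))
           and query_steps[k] == view_steps[k]):
        k += 1
    if k < len(view_steps):
        return None, query_steps          # view unusable: whole query to base docs
    return view_steps, query_steps[k:]    # subquery on view, rest on base docs

view_sub, base_sub = decompose(["site", "people", "person", "name"],
                               ["site", "people", "person"])
print(view_sub, base_sub)   # ['site', 'people', 'person'] ['name']
```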

A Scalable OWL Horst Lite Ontology Reasoning Approach based on Distributed Cluster Memories (분산 클러스터 메모리 기반 대용량 OWL Horst Lite 온톨로지 추론 기법)

  • Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE / v.42 no.3 / pp.307-319 / 2015
  • Current ontology studies use the Hadoop distributed storage framework to perform map-reduce-based reasoning over scalable ontologies. In this paper, we instead propose an approach to scalable Web Ontology Language (OWL) Horst Lite reasoning based on distributed cluster memories. Rule-based reasoning, which is frequently used for scalable ontologies, iteratively executes triple-format ontology rules until no new data can be inferred, so when reasoning over scalable ontologies is performed on hard drives, the reasoner suffers from performance limitations. To overcome this drawback, we load the ontologies into distributed cluster memories using Spark, a memory-based distributed computing framework, and execute the reasoning there. To implement an appropriate OWL Horst Lite reasoning system on Spark, our method divides the scalable ontologies into blocks, loads each block into the cluster nodes, and subsequently handles the data in the distributed memories. We used the Lehigh University Benchmark (LUBM), which evaluates ontology inference and search speed, to experimentally evaluate the proposed method on LUBM8000 (1.1 billion triples, 155 gigabytes). Compared with WebPIE, a representative map-reduce-based scalable ontology reasoner, the proposed approach improved throughput by 320% (62k/s versus 19k/s for WebPIE).
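A condensed PySpark sketch in the spirit of the memory-based approach: one OWL Horst/RDFS-style rule (type propagation through rdfs:subClassOf) is applied over cached DataFrames until nothing new is inferred. The tiny ontology and single rule are illustrative, not the reasoner's full rule set.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("horst-sketch").getOrCreate()
types = spark.createDataFrame([("alice", "Student")], ["inst", "cls"]).cache()
subclass = spark.createDataFrame(
    [("Student", "Person"), ("Person", "Agent")], ["sub", "sup"]).cache()

while True:
    derived = (types.join(subclass, types.cls == subclass.sub)
                    .select(col("inst"), col("sup").alias("cls")))
    new = derived.subtract(types)
    if new.count() == 0:                  # fixpoint: nothing new inferred
        break
    types = types.union(new).cache()      # keep the working set in cluster memory

types.show()   # alice is inferred to be a Person and an Agent
```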

AFTL: An Efficient Adaptive Flash Translation Layer using Hot Data Identifier for NAND Flash Memory (AFTL: Hot Data 검출기를 이용한 적응형 플래시 전환 계층)

  • Yun, Hyun-Sik;Joo, Young-Do;Lee, Dong-Ho
    • Journal of KIISE:Computer Systems and Theory / v.35 no.1 / pp.18-29 / 2008
  • NAND flash memory has become a popular storage device in recent years because of its low power consumption, fast access speed, shock resistance, and light weight. However, it has distinct characteristics such as an erase-before-write architecture, asymmetric read/write/erase speeds, and a limit on the number of erasures per block. Owing to these limitations, various Flash Translation Layers (FTLs) have been proposed to use NAND flash memory effectively. Systems that adopt a conventional FTL may suffer severe performance degradation from hot data, i.e., data that are frequently overwritten at the same logical address. In this paper, we propose a novel FTL algorithm called the Adaptive Flash Translation Layer (AFTL), which uses sector mapping for hot data and log-based block mapping for cold data. Our system removes redundant write operations and erase operations by separating hot data from cold data. Moreover, read performance is enhanced by sector translation, which tends to require few read operations. A series of experiments was conducted to evaluate the performance of the proposed method, with very impressive results.
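A simplified Python sketch of the hot/cold split: a counter-based hot-data identifier routes frequently overwritten sectors to fine-grained sector mapping, while cold writes fall through to coarse log-block mapping. The threshold and counting scheme are assumptions, not the paper's exact identifier.

```python
from collections import Counter

class AFTLSketch:
    def __init__(self, hot_threshold=4, sectors_per_block=64):
        self.write_counts = Counter()     # hot-data identifier state
        self.hot_threshold = hot_threshold
        self.sectors_per_block = sectors_per_block
        self.sector_map = {}              # lsn -> physical address (hot path)

    def write(self, lsn):
        self.write_counts[lsn] += 1
        if self.write_counts[lsn] >= self.hot_threshold:
            # Hot: map the single sector, avoiding whole-block copies on rewrite.
            self.sector_map[lsn] = ("sector", lsn)
            return self.sector_map[lsn]
        # Cold: fall back to log-based block mapping (block number + offset).
        return ("block", lsn // self.sectors_per_block, lsn % self.sectors_per_block)

ftl = AFTLSketch()
for _ in range(5):
    addr = ftl.write(lsn=7)    # repeated overwrites make sector 7 hot
print(addr)                    # ('sector', 7)
```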