• Title/Summary/Keyword: information storage


Design and Implementation of the Flash File System that Maintains Metadata in Non-Volatile RAM (메타데이타를 비휘발성 램에 유지하는 플래시 파일시스템의 설계 및 구현)

  • Doh, In-Hwan;Choi, Jong-Moo;Lee, Dong-Hee;Noh, Sam-H.
    • Journal of KIISE:Computer Systems and Theory / v.35 no.2 / pp.94-101 / 2008
  • Non-volatile RAM (NVRAM) is a form of next-generation memory that combines the nonvolatility of storage with the byte addressability of RAM. The advent of NVRAM may bring about drastic changes to the system software landscape: when NVRAM is efficiently exploited in the system software layer, we expect that system performance can be significantly improved. In this regard, we develop a new Flash file system, named MiNVFS (Metadata in NVram File System). MiNVFS maintains all metadata in NVRAM, while storing all file data in Flash memory. In this paper, we present quantitative experimental results that show how large the performance gains from exploiting NVRAM can be. Compared to YAFFS, a typical Flash file system, MiNVFS requires only minimal time for mounting, and it outperforms YAFFS by an average of around 400% in terms of total execution time for the realistic workloads we considered.
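The paper's core idea, keeping the namespace in byte-addressable NVRAM while file contents stay in page-based flash, can be illustrated with a toy sketch. This is an invented illustration under that assumption, not the authors' MiNVFS implementation:

```python
# Toy sketch (not the authors' MiNVFS code): metadata lives in a
# byte-addressable "NVRAM" region, file contents in page-based "flash".

class ToyNVFS:
    PAGE = 4  # tiny flash page size for illustration

    def __init__(self):
        self.nvram_meta = {}   # filename -> list of flash page indexes
        self.flash = []        # append-only flash pages

    def write(self, name, data):
        pages = []
        for i in range(0, len(data), self.PAGE):
            pages.append(len(self.flash))
            self.flash.append(data[i:i + self.PAGE])
        self.nvram_meta[name] = pages  # metadata update is a plain RAM write

    def read(self, name):
        return b"".join(self.flash[p] for p in self.nvram_meta[name])

    def mount(self):
        # The name-to-page map survives in NVRAM, so mounting is O(1):
        # no scan over flash pages to rebuild the namespace (as YAFFS does).
        return self.nvram_meta
```

Because the metadata never leaves NVRAM, `mount()` simply returns a live structure instead of reconstructing it from flash, which is where the near-zero mount time in the experiments comes from.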

Scalable RDFS Reasoning Using the Graph Structure of In-Memory based Parallel Computing (인메모리 기반 병렬 컴퓨팅 그래프 구조를 이용한 대용량 RDFS 추론)

  • Jeon, MyungJoong;So, ChiSeoung;Jagvaral, Batselem;Kim, KangPil;Kim, Jin;Hong, JinYoung;Park, YoungTack
    • Journal of KIISE / v.42 no.8 / pp.998-1009 / 2015
  • In recent years, there has been growing interest in RDFS inference for building rich knowledge bases. However, it is difficult to improve inference performance on large data using a single machine, so researchers are developing RDFS inference engines for distributed computing environments. The existing inference engines, however, cannot process data in real time, are difficult to implement, and handle repetitive tasks poorly. To overcome these problems, we propose a method for constructing an in-memory distributed inference engine that uses a parallel graph structure. In general, an ontology based on a triple structure possesses a graph structure, so it is intuitive to design a graph-structure-based inference engine. Moreover, the RDFS inference rules can be implemented with the operators of the graph structure, allowing us to design the inference engine according to the graph structure rather than the structure of a data table. We evaluate the proposed engine on the LUBM1000 and LUBM3000 data sets to test inference speed. The results indicate that the proposed in-memory distributed inference engine runs about 10 times faster than an in-storage inference engine.
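As a concrete example of implementing RDFS rules over an in-memory triple set, here is a minimal fixed-point loop for two standard entailment rules (rdfs9 and rdfs11 from the RDFS specification). It is a sequential sketch for illustration, not the authors' parallel graph engine:

```python
# Minimal sketch: apply two RDFS entailment rules to a triple set until
# no new triples appear (a fixed point).

SUBCLASS, TYPE = "rdfs:subClassOf", "rdf:type"

def rdfs_closure(triples):
    triples = set(triples)
    while True:
        new = set()
        for (s, p, o) in triples:
            if p == SUBCLASS:
                # rdfs11: subClassOf is transitive
                for (s2, p2, o2) in triples:
                    if p2 == SUBCLASS and s2 == o:
                        new.add((s, SUBCLASS, o2))
            elif p == TYPE:
                # rdfs9: instances inherit types along subClassOf
                for (s2, p2, o2) in triples:
                    if p2 == SUBCLASS and s2 == o:
                        new.add((s, TYPE, o2))
        if new <= triples:
            return triples
        triples |= new
```

In a graph framing, each rule is a join between edges of the graph, which is why the operators of a parallel graph framework map naturally onto RDFS reasoning.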

Data Block based User Authentication for Outsourced Data (아웃소싱 데이터 보호를 위한 데이터 블록 기반의 상호 인증 프로토콜)

  • Hahn, Changhee;Kown, Hyunsoo;Kim, Daeyeong;Hur, Junbeom
    • Journal of KIISE / v.42 no.9 / pp.1175-1184 / 2015
  • Recently, there has been an explosive increase in the volume of multimedia data available as a result of the development of multimedia technologies. More and more data is becoming available on a variety of web sites, and it has become increasingly cost-prohibitive for a single data server to store and process multimedia files locally. Therefore, many service providers outsource data to cloud storage to reduce costs. Such behavior raises one serious concern: how can data users be authenticated in a secure and efficient way? The most widely used password-based authentication methods suffer from numerous security disadvantages. Multi-factor authentication protocols based on a variety of communication channels, such as SMS, biometrics, or hardware tokens, may improve security but inevitably reduce usability. To this end, we present a data block-based authentication scheme that is secure and preserves usability: users do nothing more than enter a password. In addition, the proposed scheme can be used effectively to revoke user rights. To the best of our knowledge, ours is the first data block-based authentication scheme for outsourced data that is proven secure without degrading usability. An experiment conducted on the Amazon EC2 cloud service shows that the proposed scheme achieves nearly constant-time user authentication.
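The general shape of block-based challenge-response authentication can be sketched as below. The key derivation, HMAC construction, and block-selection details here are assumptions for illustration; the paper's actual protocol differs:

```python
# Hedged sketch of block-based challenge-response (not the paper's exact
# protocol): the server challenges with random block indexes, and the
# client answers with an HMAC over those blocks keyed by a
# password-derived key, so no plain password ever crosses the wire.
import hashlib
import hmac

def derive_key(password, salt):
    # Password-based key derivation (PBKDF2 with SHA-256).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def respond(key, blocks, challenge):
    # challenge: list of block indexes picked at random by the server.
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for i in challenge:
        mac.update(blocks[i])
    return mac.hexdigest()
```

The server, holding the same key and blocks, verifies by recomputing the MAC; a fresh random challenge per login prevents simple replay.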

Dependency-based Framework of Combining Multiple Experts for Recognizing Unconstrained Handwritten Numerals (무제약 필기 숫자를 인식하기 위한 다수 인식기를 결합하는 의존관계 기반의 프레임워크)

  • Kang, Hee-Joong;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications / v.27 no.8 / pp.855-863 / 2000
  • Although the Behavior-Knowledge Space (BKS) method, one of the well-known decision combination methods, needs no assumptions when combining multiple experts, it theoretically requires exponential storage space for storing and managing the jointly observed K decisions from K experts. That is, combining K experts requires a (K+1)st-order probability distribution, and it is well known that such a distribution becomes unmanageable to store and estimate even for small K. To overcome this weakness, researchers have studied decomposing a probability distribution into component distributions and approximating the distribution with a product of those components. One previous approach applies a conditional independence assumption to the distribution; another approximates the distribution with a product of only first-order tree dependencies or second-order distributions, as shown in [1]. In this paper, dependencies of order higher than first are considered in approximating the distribution, and a dependency-based framework is proposed that optimally approximates the (K+1)st-order probability distribution with a product set of dth-order dependencies, where $1 \le d \le K$, and combines the multiple experts based on that product set using the Bayesian formalism. The framework was evaluated experimentally on the standardized CENPARMI database.
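The first-order (conditional independence) baseline that this framework generalizes reduces to a naive-Bayes product over the K experts' decisions. A minimal sketch, with invented numbers for the priors and likelihoods:

```python
# Conditional-independence combination of K experts (the first-order
# baseline): P(class | d1..dK) is proportional to
# P(class) * product over k of P(dk | class). Numbers are illustrative.
from math import prod

def combine(prior, likelihoods, decisions):
    # likelihoods[k][cls][d] = P(expert k outputs d | true class is cls)
    scores = {}
    for cls, p in prior.items():
        scores[cls] = p * prod(likelihoods[k][cls][d]
                               for k, d in enumerate(decisions))
    total = sum(scores.values())
    return {cls: s / total for cls, s in scores.items()}  # normalized posterior
```

The paper's dth-order framework replaces each per-expert factor with joint factors over up to d experts, trading storage for a tighter approximation of the full (K+1)st-order distribution.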


A Study on IPA-based Competitiveness Enhancement Measures for Regular Freight Service (IPA분석을 이용한 정기화물운송업의 경쟁력 강화방안에 관한 연구)

  • Lee, Young-Jae;Park, Soo-Hong;Sun, Il-Suck
    • Journal of Distribution Science / v.13 no.1 / pp.83-91 / 2015
  • Purpose - Despite the structural irrationality of multi-level transportation and rising oil prices, the domestic freight transportation market continues to grow, mirroring the rise in e-commerce and the resulting increase in courier services and freight volumes. Several studies on courier services have been conducted, but few studies or statistics have been published on regular freight services, although they play a role in the freight service market. The present study identifies the characteristics of regular freight service users in order to seek competitiveness enhancement measures specific to regular freight services. Research design, data, and methodology - IPA is a comparative analysis of the relative importance of, and satisfaction with, each attribute simultaneously. This study used IPA because it facilitates analyzing importance and performance, deriving implications, and understanding results visually. To enhance the competitiveness of regular freight services, this study surveyed current users on the importance of the regular freight service factors. A total of 200 copies of a questionnaire were circulated and 190 were returned. In addition to demographics, respondents rated the importance of and satisfaction with services on a 5-point Likert scale. Excluding 3 inappropriate copies, 187 of the 190 were analyzed, with PASW Statistics 18 used for statistical analysis. A total of 20 question items were selected as the service factors in the questionnaire, based on a pilot survey and previous studies. Results - According to the IPA comparing the importance of and satisfaction with service factors, both importance and satisfaction are high in the 1st quadrant, which involves the economic advantage of using regular freight services, quick arrival at destinations, heavy freight handling, and fewer time constraints on freight receipt/dispatch.
This area requires continuous management. Satisfaction is higher than importance in the 2nd quadrant, which involves the adequacy of freight, cost savings over ordinary courier services, notification of freight arrival, and freight tracking information. This area requires intensive investment and management. Satisfaction is lower than importance in the 3rd quadrant, which involves the credit card payment system, courier delivery service, distance to freight handling sites, ease of access to freight handling sites, and prompt problem solving. This area requires further intensive management. Both importance and satisfaction are low in the 4th quadrant, which involves the availability of collection service, storage space at freight handling sites, kindness of collection/delivery staff, kindness of outlet staff, and easy delivery checks. These variables should be excluded from priority control targets. Conclusions - Based on the IPA, the service factors that need priority control because of high importance and low satisfaction are the credit card payment system, delivery service, distance to freight handling sites, ease of access to freight handling sites, and prompt problem solving. These findings should be applied to future marketing strategies for regular freight services and to the development of competitiveness enhancement programs.
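The quadrant assignment underlying an IPA grid can be sketched as follows. One common convention (assumed here) places each factor relative to the grand means of importance and satisfaction; quadrant numbers follow this abstract (1st high/high, 2nd low importance/high satisfaction, 3rd high importance/low satisfaction, 4th low/low):

```python
# IPA quadrant assignment sketch: compare each factor's mean importance
# and mean satisfaction against the grand means (a common convention,
# assumed here rather than taken from the paper).

def ipa_quadrant(importance, satisfaction, imp_mean, sat_mean):
    high_imp = importance >= imp_mean
    high_sat = satisfaction >= sat_mean
    if high_imp and high_sat:
        return 1   # keep up the good work (continuous management)
    if not high_imp and high_sat:
        return 2
    if high_imp and not high_sat:
        return 3   # priority control targets
    return 4       # low priority
```

Factors landing in quadrant 3, high importance but low satisfaction, are exactly the priority-control items named in the conclusions.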

A Power-aware Branch Predictor for Embedded Processors (내장형 프로세서를 위한 저전력 분기 예측기 설계 기법)

  • Kim, Cheol-Hong;Song, Sung-Gun
    • The KIPS Transactions:PartA / v.14A no.6 / pp.347-356 / 2007
  • In designing a branch predictor, microarchitects should consider power consumption in addition to accuracy, especially for embedded processors. This paper proposes a power-aware branch predictor, based on the gshare predictor, that accesses the BTB (Branch Target Buffer) only when the prediction from the PHT (Pattern History Table) is taken. To enable this selective access to the BTB without additional delay, the PHT in the proposed predictor is accessed one cycle earlier than the traditional PHT. As a side effect, two predictions are obtained through one access to the PHT, which leads to further power savings. The proposed branch predictor reduces power consumption while requiring no additional storage arrays, incurring no additional delay (except one MUX delay), and never harming accuracy. Simulation results show that the proposed predictor reduces power consumption by 35~48% compared to the traditional predictor.
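The selective-BTB mechanism can be sketched as a gshare-style predictor that consults the BTB only on a taken prediction. Table sizes are arbitrary and the one-cycle-early PHT pipelining is abstracted away; this is an illustrative model, not the authors' design:

```python
# Sketch of the selective-BTB idea: look up the gshare PHT first, and
# touch the BTB only when the 2-bit counter predicts "taken", so
# not-taken predictions cost no BTB access (and hence less power).

class PowerAwareGshare:
    def __init__(self, bits=10):
        self.mask = (1 << bits) - 1
        self.pht = [2] * (1 << bits)   # 2-bit counters, init weakly taken
        self.ghr = 0                   # global history register
        self.btb_lookups = 0           # proxy for BTB energy spent

    def predict(self, pc, btb):
        idx = (pc ^ self.ghr) & self.mask
        taken = self.pht[idx] >= 2
        target = None
        if taken:                      # BTB accessed only on predicted-taken
            self.btb_lookups += 1
            target = btb.get(pc)
        return taken, target

    def update(self, pc, taken):
        idx = (pc ^ self.ghr) & self.mask
        self.pht[idx] = min(3, self.pht[idx] + 1) if taken else max(0, self.pht[idx] - 1)
        self.ghr = ((self.ghr << 1) | int(taken)) & self.mask
```

Since a large fraction of dynamic branches are predicted not-taken, skipping the BTB on those predictions is where the reported power saving comes from.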

A Study of Standard eBook Contents Conversion (전자책 표준간의 컨텐츠 변환에 관한 연구)

  • Ko, Seung-Kyu;Sohn, Won-Sung;Lim, Soon-Bum;Choy, Yoon-Chul
    • The KIPS Transactions:PartD / v.10D no.2 / pp.267-276 / 2003
  • Many countries have established eBook standards suited to their environments: in the USA, OEB PS was announced for the distribution and display of eBooks; in Japan, JepaX for storage and exchange; and in Korea, EBKS was created for the clear exchange of eBook contents. These diverse objectives lead to different content structures, and this variety causes problems when exchanging content. To exchange eBook contents correctly, the content structure must be considered. In this paper, we therefore study methods of converting standard eBook contents, based on the Korean eBook standard, that take content structure into account. To convert contents properly, the mapping relations must be clearly defined. To this end, we examine each standard's structure and extension mechanisms, and use path notations and namespaces for precise description. Moreover, by analyzing each mapping relationship, we classify conversion cases into automatic, semi-automatic, and manual conversions. Finally, we write conversion scripts and experiment with them.
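A mapping-driven conversion of the kind described, with paths and namespaces defining how one standard's elements map onto another's, might look like the following sketch. All element names, the namespace URI, and the mapping table are invented for illustration and are not taken from EBKS, OEB PS, or JepaX:

```python
# Illustrative sketch of path-and-namespace-based content conversion:
# a mapping table drives the transformation of a source-standard
# fragment into target-standard elements.
import xml.etree.ElementTree as ET

NS = {"src": "http://example.org/src-ebook"}            # invented namespace
MAPPING = {"src:title": "Title", "src:author": "Creator"}  # source path -> target tag

def convert(xml_text):
    root = ET.fromstring(xml_text)
    out = ET.Element("package")
    for path, target in MAPPING.items():
        # Namespaced path lookup keeps the mapping unambiguous even when
        # two standards use the same local element names.
        for el in root.findall(path, NS):
            ET.SubElement(out, target).text = el.text
    return ET.tostring(out, encoding="unicode")
```

A table-driven converter like this corresponds to the "automatic" case; semi-automatic and manual cases arise when a source structure has no single unambiguous target.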

An Efficient Data Block Replacement and Rearrangement Technique for Hybrid Hard Disk Drive (하이브리드 하드디스크를 위한 효율적인 데이터 블록 교체 및 재배치 기법)

  • Park, Kwang-Hee;Lee, Geun-Hyung;Kim, Deok-Hwan
    • Journal of KIISE:Computing Practices and Letters / v.16 no.1 / pp.1-10 / 2010
  • Recently, heterogeneous storage systems such as the hybrid hard disk drive (H-HDD), which combines flash memory and a magnetic disk, have been launched, as the read performance of NAND flash memory has become similar to that of the hard disk drive (HDD) and its power consumption has fallen below that of the HDD. However, the read and write operations of NAND flash memory are slower than those of a rotational disk, and serious CPU and main-memory overheads are incurred when intensive write requests to flash memory occur repeatedly. In this paper, we propose the Least Frequently Used-Hot (LFU-Hot) scheme, which replaces data blocks whose read-reference frequency is low and whose write-update frequency is high, and a data flushing scheme that rearranges data blocks into the multi-zone of the rotational disk. Experimental results show that the execution time of the proposed method is 38% shorter than those of the conventional LRU and LFU block replacement schemes in terms of I/O performance, and that the proposed method extends the lifespan of the non-volatile cache by 40% over the conventional LRU, LFU, and FIFO block replacement schemes.
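The replacement policy's core heuristic, evicting blocks that are rarely read but frequently updated, can be sketched as below. The victim-scoring rule here is an invented stand-in for the paper's exact formulation:

```python
# Sketch of the LFU-Hot idea: when the non-volatile cache is full, evict
# the block with the fewest reads (ties broken toward the most writes),
# since "write-hot, read-cold" blocks wear the cache without serving reads.

class LFUHotCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}           # block id -> (read_count, write_count)

    def access(self, blk, is_write):
        if blk not in self.blocks and len(self.blocks) >= self.capacity:
            # Victim selection: lowest read count first, then highest
            # write count (invented scoring, for illustration only).
            victim = min(self.blocks,
                         key=lambda b: (self.blocks[b][0], -self.blocks[b][1]))
            del self.blocks[victim]
        r, w = self.blocks.get(blk, (0, 0))
        self.blocks[blk] = (r + (0 if is_write else 1), w + (1 if is_write else 0))
```

Keeping read-hot blocks in the cache serves the workload's reads from flash, while pushing write-hot blocks back to the disk reduces flash wear, which is the source of the reported lifespan gain.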

A Distributed Method for Constructing a P2P Overlay Multicast Network using Computational Intelligence (지능적 계산법을 이용한 분산적 P2P 오버레이 멀티케스트 네트워크 구성 기법)

  • Park, Jaesung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.11 no.6 / pp.95-102 / 2012
  • In this paper, we propose a method that efficiently constructs a P2P overlay multicast network composed of many peers that are heterogeneous in communication bandwidth, processing power, and storage size, by selecting peers in a distributed fashion using ant-colony optimization, one of the computational intelligence methods. When selecting a parent for a newly joining node, the proposed method considers not only the capacity of a peer but also the number of children peers it supports and the hop distance between the multicast source and the peer. Thus, the P2P multicast overlay network is constructed efficiently, in that the distances between the multicast source and the peers are kept small. In addition, the proposed method works in a distributed fashion, in that peers use only local information to find a parent node; compared with a centralized method, in which a central server maintains and controls the overlay construction process, the proposed method scales well. Through simulations, we show that, by having a few high-capacity peers support many low-capacity peers, the proposed method keeps the overlay network compact even when there are a few thousand peers in the network.
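Ant-colony-style parent selection for a joining peer might be sketched as follows: each candidate is picked with probability proportional to pheromone^alpha times desirability^beta. The desirability heuristic combining spare capacity and hop distance is an assumption for illustration, not the paper's exact rule:

```python
# Sketch of ant-colony-style parent selection: weight each candidate by
# pheromone^alpha * desirability^beta, then sample proportionally
# (roulette-wheel selection). Heuristic and parameters are invented.
import random

def pick_parent(candidates, pheromone, alpha=1.0, beta=2.0, rng=random):
    # candidates: {peer: (spare_capacity, hops_from_source)}
    weights = {}
    for peer, (spare, hops) in candidates.items():
        if spare <= 0:
            continue                      # full peers cannot take children
        desirability = spare / (1 + hops)  # favor capacity, penalize distance
        weights[peer] = (pheromone.get(peer, 1.0) ** alpha) * (desirability ** beta)
    if not weights:
        return None
    r = rng.random() * sum(weights.values())
    for peer, w in weights.items():
        r -= w
        if r <= 0:
            return peer
    return peer  # numeric-precision fallback
```

Because each joining node evaluates only the candidates it knows about, selection stays local, which is what lets the construction run without a central server.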

Estimation of Carbon Stock by Development of Stem Taper Equation and Carbon Emission Factors for Quercus serrata (수간곡선식 개발과 국가탄소배출계수를 이용한 졸참나무의 탄소저장량 추정)

  • Kang, Jin-Taek;Son, Yeong-Mo;Jeon, Ju-Hyeon;Yoo, Byung-Oh
    • Journal of Climate Change Research / v.6 no.4 / pp.357-366 / 2015
  • This study was conducted to estimate the carbon stocks of Quercus serrata by deriving tree volumes for each tree height and DBH, applying a suitable stem taper equation and species-specific carbon emission factors to growth data collected from all over the country. Information on distribution area, number of trees per hectare, tree volume, and volume stocks was obtained from the 5th National Forest Inventory (2006~2010), and the method provided in the IPCC GPG was applied to estimate carbon storage and removals. Performance in predicting stem diameter at a specific point along a stem of Quercus serrata was evaluated for Kozak's model, $d = a_1 DBH^{a_2} a_3^{DBH} X^{b_1 Z^2 + b_2 \ln(Z+0.001) + b_3\sqrt{Z} + b_4 e^Z + b_5(DBH/H)}$, a well-known equation for stem taper estimation, using the validation statistics Fitness Index, Bias, and Standard Error of Bias; Kozak's model proved suitable on all of them. Stem volume tables for Quercus serrata were derived by applying Kozak's model, and carbon stock tables for each tree height and DBH were developed with the country-specific carbon emission factors of Quercus serrata ($WD = 0.65\,t/m^3$, BEF = 1.55, R = 0.43). Analysis of carbon stocks by age class showed that age classes IV (11,358 ha, 36.5%) and V (10,432 ha, 33.5%), which occupy the largest areas in the age-class distribution, held 957,000 tC and 1,312,000 tC, respectively. The total carbon stock of Quercus serrata was 3,191,000 tC, about 3% of the broad-leaved forest total, and carbon sequestration per hectare was 3.8 tC/ha/yr ($13.9\,tCO_2$/ha/yr).
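The IPCC-style conversion from stem volume to carbon stock using the country-specific factors quoted above (WD = 0.65 t/m³, BEF = 1.55, R = 0.43) can be worked through as below. The carbon fraction CF = 0.5 is the common IPCC default and is an assumption here, not a value from the abstract:

```python
# Worked example of the IPCC-style carbon stock calculation:
# carbon (tC) = volume (m^3) * WD * BEF * (1 + R) * CF, where WD is wood
# density, BEF the biomass expansion factor, R the root-to-shoot ratio
# (all quoted in the abstract), and CF = 0.5 an assumed IPCC default.

def carbon_stock(volume_m3, wd=0.65, bef=1.55, r=0.43, cf=0.5):
    # stem volume -> total (above + below ground) biomass -> carbon
    return volume_m3 * wd * bef * (1 + r) * cf
```

With these factors, one cubic metre of Quercus serrata stem volume corresponds to roughly 0.72 tC, so the per-hectare carbon tables follow directly from the volume tables produced by Kozak's taper model.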