Title/Summary/Keyword: Directory Structure

46 search results

A Study on the Directory Classification Schemes of the Design Portal Site (디자인 전문 포탈 사이트의 디렉토리 구축체계에 관한 연구)

  • 임경란
    • Archives of design research / v.15 no.2 / pp.223-232 / 2002
  • As the Internet becomes widespread as a significant tool for obtaining information, there is a growing demand for systems that efficiently organize and manage information on the Internet. Accordingly, research on directory classification structures, which directly affect the efficiency of a user's information search, is being actively pursued in every field. This study intends to suggest an efficient classification structure by comparing and analyzing the directory classification structures of current design portal sites against the theory of literature classification, in order to increase search efficiency in the design sector.

Comparison of Directory Structures for SAN Based Very Large File Systems (SAN 환경 대용량 파일 시스템을 위한 디렉토리 구조 비교)

  • 김신우;이용규
    • The Journal of Society for e-Business Studies / v.9 no.1 / pp.83-104 / 2004
  • Recently, information systems that require the storage and retrieval of huge amounts of data have come into wide use. Accordingly, research efforts have been made to develop Linux cluster file systems for the SAN environment, in which clients themselves can manage metadata and access data directly. A semi-flat directory structure based on extendible hashing has also been proposed to support fast retrieval of files [1]. In this research, we have designed and implemented the semi-flat extendible-hash directory under the Linux system. In order to evaluate the practicality of the directory, we have also implemented a B+-tree based directory and measured its performance. According to the performance comparison, the extendible-hash directory performs better for insert, delete, and search operations, whereas the B+-tree directory is better at sorting files. (An illustrative sketch of extendible hashing follows this entry.)

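The semi-flat directory above rests on extendible hashing. As a rough illustration of that technique, the toy Python sketch below keeps a directory of bucket pointers indexed by the low-order bits of a hash, doubles the directory when an overflowing bucket is already as deep as the directory itself, and otherwise splits only the bucket. Bucket capacity, the class names, and the filename-to-inode payload are our assumptions, not details from the paper.

```python
# Illustrative sketch only, assuming fixed-capacity buckets; class names
# and the filename -> inode payload are hypothetical.

BUCKET_CAPACITY = 4

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.items = {}                      # filename -> inode number (toy)

class ExtendibleHashDirectory:
    def __init__(self):
        self.global_depth = 1
        self.dir = [Bucket(1), Bucket(1)]    # indexed by low-order hash bits

    def _index(self, key):
        return hash(key) & ((1 << self.global_depth) - 1)

    def lookup(self, key):
        return self.dir[self._index(key)].items.get(key)

    def insert(self, key, value):
        bucket = self.dir[self._index(key)]
        if key in bucket.items or len(bucket.items) < BUCKET_CAPACITY:
            bucket.items[key] = value
            return
        # Overflow: double the directory only when the bucket is already
        # as deep as the directory, then split the bucket and retry.
        if bucket.local_depth == self.global_depth:
            self.dir = self.dir + self.dir
            self.global_depth += 1
        self._split(bucket)
        self.insert(key, value)

    def _split(self, bucket):
        bucket.local_depth += 1
        sibling = Bucket(bucket.local_depth)
        high_bit = 1 << (bucket.local_depth - 1)
        # Repoint the directory slots whose new distinguishing bit is set.
        for i, b in enumerate(self.dir):
            if b is bucket and (i & high_bit):
                self.dir[i] = sibling
        old_items, bucket.items = bucket.items, {}
        for k, v in old_items.items():
            self.dir[self._index(k)].items[k] = v

d = ExtendibleHashDirectory()
for i in range(100):
    d.insert(f"file{i}.txt", i)
assert d.lookup("file7.txt") == 7
```

The point of the scheme is that a lookup costs one hash plus one directory probe, while growth stays local to the overflowing bucket.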

A File/Directory Reconstruction Method of APFS Filesystem for Digital Forensics

  • Cho, Gyu-Sang;Lim, Sooyeon
    • International Journal of Internet, Broadcasting and Communication / v.14 no.3 / pp.8-16 / 2022
  • In this paper, we propose a method of reconstructing the file system to obtain digital forensic information from the APFS file system when the metadata describing the file system's structure has been deleted due to partial damage to the disk. The method reconstructs the tree structure of the file system by retrieving only the B-tree nodes in which file/directory information is stored. It does not construct nodes from structural information such as the Container Superblock (NXSB), the Volume Checkpoint Superblock (APSB), or B-tree root and leaf node information. Instead, the entire disk is traversed cluster by cluster to find scattered B-tree leaf nodes and gather all the file-system information, and the file/directory tree structure is then reconstructed from the refined essential data after duplicates are removed. We demonstrate that the proposed method is valid by applying it to a number of generated user files and directories. (A toy version of the gather-and-rebuild step follows this entry.)
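
The abstract describes scanning the whole disk for scattered B-tree leaf nodes, discarding duplicate records, and rebuilding the file/directory tree without the NXSB/APSB superblocks. The self-contained toy below starts from already-extracted leaf records (real code would first parse APFS object headers cluster by cluster) and shows the dedup-and-rebuild step; the record fields and the transaction-id tiebreak are simplified assumptions of ours, not the paper's layouts.

```python
# Toy of the gather-and-rebuild step. APFS is copy-on-write, so several
# versions of the same record survive on disk; dedup keeps the newest.
# (obj_id, name, parent_id, xid): higher xid = newer copy of the record.
scattered_records = [
    (2, "Documents", 1, 10),
    (3, "notes.txt", 2, 10),
    (3, "notes.txt", 2, 12),   # newer duplicate from a later checkpoint
    (4, "photo.jpg", 2, 11),
]

def rebuild_tree(records):
    # Keep only the newest version of each object (dedup by obj_id).
    newest = {}
    for obj_id, name, parent_id, xid in records:
        if obj_id not in newest or xid > newest[obj_id][2]:
            newest[obj_id] = (name, parent_id, xid)
    # Reconstruct parent -> children links without any superblock info.
    children = {}
    for obj_id, (name, parent_id, _) in newest.items():
        children.setdefault(parent_id, []).append((obj_id, name))
    return children

def print_tree(children, node=1, depth=0):
    for obj_id, name in sorted(children.get(node, [])):
        print("  " * depth + name)
        print_tree(children, obj_id, depth + 1)

print_tree(rebuild_tree(scattered_records))
```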

A Digital Forensic Analysis for Directory in Windows File System (Windows 파일시스템의 디렉토리에 대한 디지털 포렌식 분석)

  • Cho, Gyusang
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.2 / pp.73-90 / 2015
  • When file commands are applied to files in a directory, the directory as well as the files undergo changes in the timestamps of their MFT entries. Based on an understanding of these changes, this work provides a digital forensic analysis of the directory timestamp changes caused by the execution of file commands. NTFS uses a B-tree index structure to manage efficient storage of huge numbers of files and fast lookups, so the directory's index tree changes when files are manipulated by commands. From a digital forensic point of view, we try to understand the behavior of the B-tree indexes and look for traces of files from which to collect information. However, it is not easy to analyze the directory index entries after file commands have been executed, and digital forensic research on the NTFS directory and its B-tree indexing is comparatively rare. Focusing on this, we analyze the directory timestamp changes after executing file commands, including creation, copy, and deletion, and present a method for finding forensic evidence of the deletion of a directory containing files. With some cases, i.e., examples of file copy and file deletion commands, we analyze the problem of directory timestamp changes and show how to find evidence of the deletion of a directory containing files. (A small observation script follows this entry.)
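
At a much coarser level than MFT analysis, the portable script below shows the phenomenon the paper studies: a directory's own timestamps move when files inside it are created and deleted. It uses os.stat() rather than NTFS internals, so it illustrates the observable behavior, not the $MFT index records themselves.

```python
# Watch a directory's timestamps change as files inside it are created
# and deleted.

import os
import tempfile
import time

def dir_times(path):
    st = os.stat(path)
    # On Windows st_ctime is the creation time; on POSIX it is the
    # metadata-change time.
    return (st.st_mtime, st.st_ctime, st.st_atime)

d = tempfile.mkdtemp()
snapshots = [("initial", dir_times(d))]

time.sleep(1)                                   # make deltas visible
open(os.path.join(d, "a.txt"), "w").close()     # create: updates the index
snapshots.append(("after create", dir_times(d)))

time.sleep(1)
os.remove(os.path.join(d, "a.txt"))             # delete: updates it again
snapshots.append(("after delete", dir_times(d)))

for label, (mtime, ctime, atime) in snapshots:
    print(f"{label:13s} mtime={mtime:.0f} ctime={ctime:.0f} atime={atime:.0f}")
```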

An Implementation and Evaluation of Large-Scale Dynamic Hashing Directories (대규모 동적 해싱 디렉토리의 구현 및 평가)

  • Kim, Shin-Woo;Lee, Yong-Kyu
    • Journal of Korea Multimedia Society / v.8 no.7 / pp.924-942 / 2005
  • Recently, large-scale directories have been developed for Linux cluster file systems to store and retrieve huge amounts of data. One of them, the GFS directory, has attracted much attention because it is based on extendible hashing, a dynamic hashing technique, to support fast access to files. One distinctive feature of the GFS directory is its flat structure, in which all the leaf nodes are located at the same level of the tree. But one disadvantage of the node structure is that the height of the node tree has to be increased to keep the tree flat after an entry is inserted into a full tree that cannot accommodate it. Thus, a single addition makes the height of the whole node tree grow, and each data block of the new tree needs one more link access than in the old one. Another dynamic hashing technique that can be used for directories is linear hashing, and a couple of studies have shown that it can achieve better file access times than extendible hashing. In this research, we have designed and implemented an extendible hashing directory and a linear hashing directory for large-scale Linux cluster file systems and have compared their performance. We have used the semi-flat structure, which is known to have better access performance than the flat structure. According to the results of the performance evaluation, the linear hashing directory shows slightly better performance for file inserts and accesses in most cases, whereas the extendible hashing directory is somewhat better at space utilization. (A toy linear-hashing sketch follows this entry.)

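For contrast with the extendible-hashing sketch earlier, here is a toy linear-hashing directory in the same style: buckets are split one at a time in round-robin order, so the table grows by a single bucket per split instead of doubling. Capacity, class names, and the split trigger are our assumptions, not the paper's design.

```python
# Illustrative sketch only: a toy linear-hash directory.

BUCKET_CAPACITY = 4

class LinearHashDirectory:
    def __init__(self, initial_buckets=2):
        self.n0 = initial_buckets   # buckets at level 0
        self.level = 0              # completed doubling rounds
        self.next = 0               # next bucket to split
        self.buckets = [dict() for _ in range(initial_buckets)]

    def _bucket_for(self, key):
        h = hash(key)
        i = h % (self.n0 * (1 << self.level))
        if i < self.next:           # bucket already split this round
            i = h % (self.n0 * (1 << (self.level + 1)))
        return i

    def lookup(self, key):
        return self.buckets[self._bucket_for(key)].get(key)

    def insert(self, key, value):
        self.buckets[self._bucket_for(key)][key] = value
        # Classic linear hashing splits the bucket at self.next, which
        # is not necessarily the bucket that just overflowed.
        if len(self.buckets[self._bucket_for(key)]) > BUCKET_CAPACITY:
            self._split()

    def _split(self):
        self.buckets.append({})
        old, self.buckets[self.next] = self.buckets[self.next], {}
        mod = self.n0 * (1 << (self.level + 1))
        for k, v in old.items():     # rehash with the next round's modulus
            self.buckets[hash(k) % mod][k] = v
        self.next += 1
        if self.next == self.n0 * (1 << self.level):
            self.level += 1          # round complete: start a new one
            self.next = 0

lh = LinearHashDirectory()
for i in range(50):
    lh.insert(f"file{i}.txt", i)
assert lh.lookup("file9.txt") == 9
```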

A Robust LDAP Server Using Group Communication (그룹통신을 이용한 견고한 LDAP 서버)

  • Moon, Nam-Doo;Ahn, Geon-Tae;Park, Yang-Soo;Lee, Myung-Joon
    • The KIPS Transactions:PartC / v.10C no.2 / pp.199-208 / 2003
  • The LDAP (Lightweight Directory Access Protocol) directory service provides information for locating resources, such as files and devices, over networks such as the Internet or an intranet. Since LDAP is widely accepted as a standard directory service architecture for the Internet, it is desirable for a group of LDAP servers to work transparently and continuously, even if the network is temporarily partitioned, by maintaining replicated directory information among the servers. In this paper, we describe the design and implementation of a robust LDAP server, which runs as a process group in the JACE group communication system, and the associated LDAP service provider, which enables Java applications to use the developed LDAP directory service. (A client-side failover sketch follows this entry.)
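
From the client's side, the benefit of replicated LDAP servers is that a lookup keeps working when one replica is unreachable. The sketch below uses the ldap3 Python library's server pool for that; it is not the paper's JACE-based replication, and the hostnames, bind DN, and search base are hypothetical.

```python
# Client-side failover over replicated LDAP servers with ldap3.
from ldap3 import Server, ServerPool, Connection, ROUND_ROBIN, ALL

replicas = [Server("ldap://replica1.example.com", get_info=ALL),
            Server("ldap://replica2.example.com", get_info=ALL)]

# active=True retries unreachable servers instead of failing at once.
pool = ServerPool(replicas, ROUND_ROBIN, active=True, exhaust=False)

conn = Connection(pool, user="cn=admin,dc=example,dc=com",
                  password="secret", auto_bind=True)
conn.search("dc=example,dc=com", "(objectClass=person)",
            attributes=["cn", "mail"])
for entry in conn.entries:
    print(entry.cn, entry.mail)
```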

A Study on the Future Storage System as brain coordinator

  • Yi, Cheon-Hee;Yi, Jae-Young
    • Journal of the Semiconductor & Display Technology / v.8 no.1 / pp.39-42 / 2009
  • In this paper, an attempt at realizing a storage system that works as a part of the human brain is discussed. The system is expected to be able to coordinate with the human brain. Current storage may have an inherent problem due to an intrinsic attribute of storage, exclusiveness: the directory structure becomes a source of confusion if it is used outside the range of its limitations. By adopting multidimensional annotation of file-name extensions and a directory-less file system, a new storage system able to associate and coordinate with the human brain may become available in the near future. This paper shows that the limitation of the current storage system clearly exists because of the human brain's limited capacity to memorize directory names. (A tag-index sketch follows this entry.)

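As a minimal reading of the directory-less idea, the sketch below addresses files by a set of annotations (tags) rather than by a path; the inverted index and all names in it are our illustration, not the paper's design.

```python
# Address files by annotations instead of a directory hierarchy.
from collections import defaultdict

class TagStore:
    def __init__(self):
        self.by_tag = defaultdict(set)   # tag -> set of file ids

    def add(self, file_id, tags):
        for t in tags:
            self.by_tag[t].add(file_id)

    def find(self, *tags):
        # Files carrying every requested annotation (set intersection).
        sets = [self.by_tag[t] for t in tags]
        return set.intersection(*sets) if sets else set()

store = TagStore()
store.add("report.doc", {"2009", "project-x", "draft"})
store.add("budget.xls", {"2009", "project-x", "final"})
print(store.find("project-x", "2009"))   # both files, no directory needed
```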

Ordinary B-tree vs NTFS B-tree: A Digital Forensics Perspectives

  • Cho, Gyu-Sang
    • Journal of the Korea Society of Computer and Information / v.22 no.8 / pp.73-83 / 2017
  • In this paper, we discuss the differences between an ordinary B-tree and the B-tree implemented by NTFS. There are many distinctions between the two; without understanding them fully, it is difficult to utilize and analyze NTFS artifacts. Not much, actually, is known about the implementation of NTFS, especially its B-tree index for directory management. Several B-tree features are examined, including node size, the minimum number of children, a root node without children, key type, key sorting, the type of pointer to a child node, expansion and reduction of nodes, and return of nodes. Furthermore, we emphasize the fact that NTFS clearly uses a B-tree structure, not a B+-tree structure. (A structural comparison sketch follows this entry.)
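
The structural distinction the paper stresses can be stated in a few lines of toy Python: a B-tree keeps a record beside every key in every node, while a B+-tree keeps records only in chained leaves, with internal nodes holding router keys. These node layouts are illustrative, not NTFS's on-disk index records.

```python
# B-tree: keys and records together at every level of the tree.
class BTreeNode:
    def __init__(self):
        self.keys = []         # sorted keys
        self.records = []      # records[i] belongs to keys[i], even in
        self.children = []     # an internal node with children

# B+-tree: internal nodes route; records live only in the leaf level.
class BPlusInternalNode:
    def __init__(self):
        self.keys = []         # router keys only, no records here
        self.children = []

class BPlusLeafNode:
    def __init__(self):
        self.keys = []
        self.records = []      # all records live in the leaves
        self.next_leaf = None  # leaves are chained for range scans
```

One forensic consequence of the B-tree form is that directory entries can be found in internal index nodes, not only in leaves.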

A Study on Secure Kerberos Authentication using Trusted Authority in Network Structure (네트웍 환경에서 안전한 Kerberos 인증 메커니즘에 관한 연구)

  • 신광철;정진욱
    • Journal of the Korea Institute of Information Security & Cryptology / v.12 no.2 / pp.123-133 / 2002
  • In a network environment, the Kerberos authentication mechanism requires unconditional trust in the Kerberos servers of other realms, and a Kerberos server must share a secret key with the servers of cooperating realms. To solve these two problems, this paper proposes a secure mechanism that applies the IETF CAT working group's PKINIT/PKCROSS algorithms with a Public Key Infrastructure and uses a directory system, so that inter-realm trust is established and verified through each Kerberos trust center. Moreover, instead of each realm's Kerberos servers having to know every other server's secret and public keys in advance, each realm's public key and a common symmetric key are obtained through the trust center, and the application server is excluded from the process of registering keys with the Key Distribution Center. (A schematic of the message flow follows this entry.)
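
The exchange the abstract outlines can be sketched schematically: instead of pre-sharing a static inter-realm secret, realm A obtains realm B's certificate from the trust center's directory and seals a fresh inter-realm session key under B's public key. The toy below fakes the public-key step with a marker tuple so the control flow stays readable; every name in it is a hypothetical placeholder, not IETF CAT message syntax.

```python
# Schematic only: real PKINIT/PKCROSS uses ASN.1 Kerberos messages and
# X.509 certificates; all names here are hypothetical placeholders.

import os
from dataclasses import dataclass

@dataclass
class Kdc:
    realm: str
    def new_session_key(self) -> bytes:
        return os.urandom(16)

class TrustCenter:
    """Directory of realm 'certificates' (toy stand-ins)."""
    def __init__(self, realms):
        self.certs = {r: f"cert-of-{r}" for r in realms}
    def lookup_certificate(self, realm):
        return self.certs[realm]

def encrypt_with_public_key(cert, data):
    return ("sealed-for", cert, data)      # toy stand-in for RSA/ECC

def pkcross_exchange(kdc_a, kdc_b, tc):
    # 1. Realm A fetches realm B's certificate from the trust center's
    #    directory instead of pre-sharing a secret key with B.
    cert_b = tc.lookup_certificate(kdc_b.realm)
    # 2. A fresh inter-realm session key is generated and delivered to B
    #    under B's public key, replacing the static inter-realm secret.
    session_key = kdc_a.new_session_key()
    sealed = encrypt_with_public_key(cert_b, session_key)
    # 3. Client requests crossing A -> B are later sealed under this key.
    return sealed, session_key

tc = TrustCenter(["REALM-A", "REALM-B"])
sealed, key = pkcross_exchange(Kdc("REALM-A"), Kdc("REALM-B"), tc)
```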

Design and Implementation of a Metadata Structure for Large-Scale Shared-Disk File System (대용량 공유디스크 파일 시스템에 적합한 메타 데이타 구조의 설계 및 구현)

  • 이용주;김경배;신범주
    • Journal of KIISE:Computer Systems and Theory / v.30 no.1 / pp.33-49 / 2003
  • Recently, there have been large storage demands for manipulating multimedia data. To meet these demands, one major line of research is the SAN (Storage Area Network), which serves local file requests directly from shared-disk storage and eliminates server bottlenecks in performance and availability. SAN also improves network latency and bandwidth through new channel interfaces such as FC (Fibre Channel). But traditional local file systems and distributed file systems are not well adapted to an efficient storage network like SAN, and there has been little research on metadata structures for large-scale inode objects such as files and directories. In this paper, we describe the architecture and design issues of our shared-disk file system and present an efficient bitmap for well-formed block allocation in each host, an extent-based semi-flat structure for storing large-scale file data, and a two-phase directory structure using extendible hashing. We also describe a detailed algorithm for implementing the file system's device driver in the Linux kernel and compare our file system with a general file system (EXT2) and a shared-disk file system (GFS) in terms of file creation, directory creation, and I/O rates. (A bitmap-allocation sketch follows this entry.)
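
The per-host bitmap idea can be pictured with a short sketch: carve the volume's allocation bitmap into per-host regions so hosts allocate without contending for the same free blocks. Region sizes and the first-fit policy are our assumptions for illustration, not the paper's algorithm.

```python
# Per-host bitmap block allocation, in the spirit of the abstract.

class BlockBitmap:
    def __init__(self, n_blocks):
        self.bits = bytearray(n_blocks)       # 0 = free, 1 = allocated

    def alloc_in_region(self, start, end):
        """First-fit scan of this host's [start, end) region."""
        for i in range(start, end):
            if not self.bits[i]:
                self.bits[i] = 1
                return i
        return None                            # region exhausted

    def free(self, i):
        self.bits[i] = 0

# Example: a 1024-block volume carved into per-host regions.
bitmap = BlockBitmap(1024)
regions = {"hostA": (0, 512), "hostB": (512, 1024)}
blk = bitmap.alloc_in_region(*regions["hostA"])
print("hostA got block", blk)
```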