• Title/Summary/Keyword: Files

Metadata Structures of Huge Shared Disk File System for Large Files in GIS (GIS에서 대용량 파일을 위한 대용량 공유 디스크 파일시스템의 메타데이터 구조)

  • 김경배;이용주;박춘서;신범주
    • Spatial Information Research
    • /
    • v.10 no.1
    • /
    • pp.93-106
    • /
    • 2002
  • Traditional file systems are designed to store and manage fairly small files, so huge files of geographic information data cannot be processed with a traditional file system such as the UNIX or Linux file system. In this paper, we propose new metadata structures and management mechanisms for a large file system in geographic information systems. The proposed mechanisms use a dynamic multi-level mode for large files and a dynamic bitmap for the huge file system. We implement the proposed mechanisms in the metadata structures of SANtopia, a shared-disk huge file system for storage area networks (SAN).
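
The "dynamic multi-level mode" described above is an index that adds indirection levels as a file grows. A minimal sketch of that idea follows; the block size, pointer counts, and function names are illustrative assumptions, not SANtopia's actual on-disk layout.

```python
# Hypothetical sketch of a dynamic multi-level block index: the inode starts
# with direct block pointers and grows extra indirection levels on demand,
# so small files stay shallow while huge GIS files get deeper index trees.
BLOCK_SIZE = 4096          # bytes per block (assumed)
PTRS_PER_BLOCK = 1024      # pointers per indirect block (assumed)
DIRECT_PTRS = 12           # direct pointers in the inode (assumed)

def levels_needed(file_size: int) -> int:
    """Return how many indirection levels a file of this size requires."""
    blocks = -(-file_size // BLOCK_SIZE)  # ceiling division
    if blocks <= DIRECT_PTRS:
        return 0                          # direct pointers suffice
    blocks -= DIRECT_PTRS
    level, reach = 1, PTRS_PER_BLOCK
    while blocks > reach:
        blocks -= reach
        reach *= PTRS_PER_BLOCK
        level += 1
    return level

# A 100 GB raster needs a deeper tree than a 1 MB file: prints "1 3".
print(levels_needed(1 << 20), levels_needed(100 << 30))
```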

Implementation of big web logs analyzer in estimating preferences for web contents (웹 컨텐츠 선호도 측정을 위한 대용량 웹로그 분석기 구현)

  • Choi, Eun Jung;Kim, Myuhng Joo
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.4
    • /
    • pp.83-90
    • /
    • 2012
  • With the rapid growth of internet infrastructure, the World Wide Web has evolved well beyond simple information sharing: first into services such as e-business, remote control and management, and virtual services, and more recently into cloud computing and social network services. Communication through the World Wide Web has shifted toward user-centric, customized services rather than provider-centric information delivery. In this environment it is very important to check and analyze user requests to a website, and estimating user preferences is especially important. Web log analysis is performed for this purpose, but it is limited in that most of the analyzed data are page-level statistics, and page-level statistics alone cannot capture user preferences, because recent web pages consist largely of media files such as images and of dynamic pages built with techniques such as CSS, Div, and iFrame. In this paper, a large-scale log analyzer was designed and implemented to analyze web server logs and estimate users' preferences for web contents. Using MapReduce on Hadoop, large logs were analyzed and preferences for media files such as images, sounds, and videos were estimated.
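
As a rough sketch of the approach described (not the authors' code), the map and reduce stages for counting requests per media file can be expressed in plain Python; the Common Log Format field positions and the extension list are assumptions.

```python
# Minimal sketch of the MapReduce idea behind the analyzer: map each access
# log line to (requested media file, 1), then reduce by summing per file.
from collections import Counter
from urllib.parse import urlparse

MEDIA_EXTS = {".jpg", ".png", ".gif", ".mp3", ".mp4", ".avi"}  # assumed set

def map_line(line: str):
    """Emit (path, 1) if the request targets a media file."""
    try:
        # Assumes Common Log Format: request is the first quoted field.
        path = urlparse(line.split('"')[1].split()[1]).path
    except IndexError:
        return  # malformed line: emit nothing
    if any(path.lower().endswith(ext) for ext in MEDIA_EXTS):
        yield path, 1

def reduce_counts(pairs):
    """Sum counts per key, as a reducer would after the shuffle."""
    totals = Counter()
    for key, n in pairs:
        totals[key] += n
    return totals

log = ['1.2.3.4 - - [x] "GET /img/logo.png HTTP/1.1" 200 512',
       '1.2.3.4 - - [x] "GET /img/logo.png HTTP/1.1" 200 512']
print(reduce_counts(p for line in log for p in map_line(line)))
```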

Tailoring Operations based on Relational Algebra for XES-based Workflow Event Logs

  • Yun, Jaeyoung;Ahn, Hyun;Kim, Kwanghoon Pio
    • Journal of Internet Computing and Services
    • /
    • v.20 no.6
    • /
    • pp.21-28
    • /
    • 2019
  • Process mining is a state-of-the-art technology in the workflow field. It has recently become more important because it reveals the actual behavior of a workflow model. However, as process mining has gained attention and matured, its raw material, the workflow event log, has also grown rapidly, and process mining algorithms cannot handle some logs because they are too large. To solve this problem, either a lightweight process mining algorithm is needed, or the event log must be divided and processed in parts. In this paper, we propose a set of operations that control and edit XES-based event logs for process mining. They are designed based on relational algebra, as used in database management systems. We designed three operations for tailoring XES event logs. The select operation keeps specified attributes and excludes the others, so the output file has the same structure and contents as the original, but each element retains only the attributes the user selected. The union operation merges two input XES files, which must come from the same process, into one XES file that integrates the contents of both. The final operation, slice, divides an XES file into several files by the number of traces. The design methods and details are presented in the paper.
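
Of the three operations, slice is the most mechanical; a minimal sketch follows, relying only on the fact that XES is XML with trace elements under the log root. The file names and chunk size are illustrative.

```python
# Sketch of the slice operation: split an XES log into output files of at
# most `traces_per_file` traces each, preserving the log's other children
# (extensions, globals, classifiers) in every part.
import copy
import xml.etree.ElementTree as ET

def slice_xes(path: str, traces_per_file: int):
    tree = ET.parse(path)
    log = tree.getroot()
    # endswith() also matches namespaced tags like "{...}trace".
    traces = [t for t in log if t.tag.endswith("trace")]
    for i in range(0, len(traces), traces_per_file):
        part = copy.deepcopy(log)
        # Keep global metadata, drop all traces, then re-add one chunk.
        for t in [t for t in part if t.tag.endswith("trace")]:
            part.remove(t)
        for t in traces[i:i + traces_per_file]:
            part.append(copy.deepcopy(t))
        ET.ElementTree(part).write(f"{path}.part{i // traces_per_file}.xes")

slice_xes("event_log.xes", 1000)  # hypothetical input file
```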

Analysis of the simpleRTJ Class File Format (simpleRTJ 클래스 파일의 형식 분석)

  • 양희재
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.11a
    • /
    • pp.373-377
    • /
    • 2002
  • Unlike desktop systems, embedded systems usually face strict restrictions on memory use, yet running a Java program requires several class files to be allocated in memory. A Java class file contains data such as a constant pool, class overview, field information, and method information. Some of these are used merely for debugging, others for program execution. This paper analyzes the internal structure, or format, of the class files used in embedded Java systems. We also investigate how much memory each part of a class file requires when the files are allocated in memory. The experiment was performed on simpleRTJ, an open-source commercial embedded Java system.
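
Every Java class file, including those loaded by embedded runtimes such as simpleRTJ, begins with the same fixed header defined by the standard class file format. A minimal sketch of reading it (the input file name is hypothetical):

```python
# Read the fixed class-file header: u4 magic, u2 minor_version,
# u2 major_version, u2 constant_pool_count, per the class file format.
import struct

def read_class_header(path: str) -> dict:
    with open(path, "rb") as f:
        magic, minor, major, cp_count = struct.unpack(">IHHH", f.read(10))
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    # The constant pool holds cp_count - 1 entries (indices 1..cp_count-1).
    return {"minor": minor, "major": major, "cp_entries": cp_count - 1}

print(read_class_header("Example.class"))  # hypothetical class file
```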

Media-oriented e-Learning System supporting Execution-File Demonstration (실행파일 시연기능을 지원하는 미디어 지향적 e-러닝 시스템)

  • Jou, Wou-Seok;Lee, Kang-Sun;Meng, Je-An
    • The KIPS Transactions:PartA
    • /
    • v.13A no.6 s.103
    • /
    • pp.555-560
    • /
    • 2006
  • In contrast with earlier remote education, which simply recorded offline classes, modern remote education emphasizes additional functions that can maximize learning efficiency. The use of multimedia information such as text, graphics, sound, and animation is considered a fundamental element of these additional functions. This paper designs and implements an encoder/decoder that accommodates such multimedia information, with an emphasis on demonstrating execution files. Instructors can demonstrate any type of execution file or application data file, and remote learners can freely try running the corresponding execution files by themselves. Consequently, a high level of learning efficiency can be achieved with the proposed encoder/decoder.

Cyclic fatigue life of Tango-Endo, WaveOne GOLD, and Reciproc NiTi instruments

  • Yilmaz, Koray;Ozyurek, Taha
    • Restorative Dentistry and Endodontics
    • /
    • v.42 no.2
    • /
    • pp.134-139
    • /
    • 2017
  • Objectives: To compare the fatigue life of Tango-Endo, WaveOne GOLD, and Reciproc NiTi instruments in a static model using artificial canals with different angles of curvature. Materials and Methods: Reciproc R25, WaveOne GOLD Primary, and Tango-Endo instruments were included in this study (n = 20). All instruments were rotated until fracture in stainless steel artificial canals with an inner diameter of 1.5 mm, angles of curvature of $45^{\circ}$, $60^{\circ}$, and $90^{\circ}$, and a radius of curvature of 5 mm; the time to fracture was recorded in seconds with a digital chronometer. The data were analyzed with Kruskal-Wallis and post-hoc Dunn tests in SPSS 21.0 software. Results: Tango-Endo files had a significantly longer fatigue life than WaveOne GOLD and Reciproc files (p < 0.05), whereas there was no statistically significant difference between the fatigue life of Reciproc and WaveOne GOLD files (p > 0.05). Increasing the angle of curvature of the canals significantly decreased the fatigue life of all three files (p < 0.05). Conclusions: Within the limitations of the present study, the cyclic fatigue life of Tango-Endo in canals with different angles of curvature was statistically higher than that of Reciproc and WaveOne GOLD.
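
For illustration only, the reported Kruskal-Wallis comparison can be reproduced on made-up time-to-fracture samples with SciPy; Dunn's post-hoc test would require an additional package such as scikit-posthocs.

```python
# Kruskal-Wallis comparison of time-to-fracture (seconds) across the three
# instruments; the sample values below are invented for illustration.
from scipy.stats import kruskal

tango_endo   = [412, 398, 450, 430, 421]
waveone_gold = [305, 290, 315, 298, 310]
reciproc     = [299, 285, 320, 301, 295]

stat, p = kruskal(tango_endo, waveone_gold, reciproc)
print(f"H = {stat:.2f}, p = {p:.4f}")  # p < 0.05 -> run Dunn's post-hoc test
```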

Forecasted Popularity Based Lazy Caching Strategy (예측된 선호도 기반 게으른 캐싱 전략)

  • Park, Chul;Yoo, Hae-Young
    • The KIPS Transactions:PartA
    • /
    • v.10A no.3
    • /
    • pp.261-268
    • /
    • 2003
  • In this paper, we propose a new caching strategy for web servers. The proposed strategy collects only statistics about each requested file, such as its popularity, when a request arrives. At certain points in time, only the files with the highest forecasted popularity are cached, all together. The forecasted-popularity-based lazy caching (FPLC) strategy uses exponential smoothing to forecast the popularity of web files, and it achieves a better cache hit ratio and cache transfer ratio than other caching strategies. Furthermore, experiments performed with real log files from web servers show that our popularity forecasting method improves cache efficiency.
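
A minimal sketch of the FPLC idea: popularity statistics are merely collected as requests arrive, and only at periodic caching points are the top forecasts cached together. The smoothing constant, batch structure, and cache size below are illustrative assumptions.

```python
# Lazy caching driven by exponentially smoothed popularity forecasts.
from collections import defaultdict

ALPHA = 0.3  # smoothing constant (assumed)

def forecast(prev: float, observed: int) -> float:
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    return ALPHA * observed + (1 - ALPHA) * prev

def lazy_cache(request_batches, cache_size):
    score = defaultdict(float)
    for batch in request_batches:          # one batch per time period
        hits = defaultdict(int)
        for f in batch:
            hits[f] += 1                   # only collect statistics here
        for f in set(score) | set(hits):
            score[f] = forecast(score[f], hits[f])
        # "Lazy": the cache is rebuilt once per period, not per request.
        yield sorted(score, key=score.get, reverse=True)[:cache_size]

batches = [["a", "a", "b"], ["a", "c", "c", "c"]]
print(list(lazy_cache(batches, cache_size=2)))  # [['a', 'b'], ['c', 'a']]
```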

Sim-Hadoop : Leveraging Hadoop Distributed File System and Parallel I/O for Reliable and Efficient N-body Simulations (Sim-Hadoop : 신뢰성 있고 효율적인 N-body 시뮬레이션을 위한 Hadoop 분산 파일 시스템과 병렬 I / O)

  • Awan, Ammar Ahmad;Lee, Sungyoung;Chung, Tae Choong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2013.05a
    • /
    • pp.476-477
    • /
    • 2013
  • Gadget-2 is a scientific simulation code that has been used for many different types of simulations, such as colliding galaxies, cluster formation, and the popular Millennium Simulation. The code is parallelized with the Message Passing Interface (MPI) and written in C. There is also a Java adaptation of the original code, written with MPJ Express, called Java Gadget. Java Gadget writes a large amount of checkpoint data, which may or may not use the HDF-5 file format. Since HDF-5 is MPI-IO compliant, we can use our MPJ-IO library to read and write the checkpoint files in parallel and improve I/O performance. Additionally, to add reliability to code execution, we propose using the Hadoop Distributed File System (HDFS) for writing the intermediate data (checkpoint files) and final data (output files). The current code reads and writes the input, output, and checkpoint files sequentially, which can easily become a bottleneck for large-scale simulations. In this paper, we propose Sim-Hadoop, a framework that leverages HDFS and MPJ-IO to improve the I/O performance of the Java Gadget code.
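
A minimal sketch of the bottleneck Sim-Hadoop addresses: writing checkpoint data in parallel chunks rather than sequentially. Local files stand in for HDFS here and the chunking scheme is an assumption; the paper itself uses MPJ-IO and HDFS.

```python
# Parallel checkpoint writing: split the state into chunks and write them
# concurrently instead of as one long sequential write.
import os
from concurrent.futures import ThreadPoolExecutor

def write_chunk(path: str, chunk: bytes) -> int:
    with open(path, "wb") as f:
        return f.write(chunk)

def checkpoint(data: bytes, out_dir: str, n_chunks: int = 4) -> int:
    os.makedirs(out_dir, exist_ok=True)
    size = -(-len(data) // n_chunks)  # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        jobs = [pool.submit(write_chunk, f"{out_dir}/part-{i:05d}", c)
                for i, c in enumerate(chunks)]
        return sum(j.result() for j in jobs)  # total bytes written

print(checkpoint(b"x" * (1 << 20), "checkpoint_0001"))
```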

Isonumber based Iso-Key Interchange Protocol for Network Communication

  • Dani, Mamta S.;Meshram, Akshaykumar;Pohane, Rupesh;Meshram, Rupali R.
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.2
    • /
    • pp.209-213
    • /
    • 2022
  • A key exchange protocol (KEP) is an essential mechanism for securely authenticating transmission among two or more users in cyberspace. Digital files are protected and transmitted over public channels by encryption, with a single key shared between the channel parties and used both to encrypt and to decrypt the files. Done properly, this prevents unauthorized third parties from imposing a key of their choice on the authorized parties. In this article, we propose a new KEP, termed the iso-key interchange protocol, based on a generalization of modern mathematics known as isomathematics, using isonumbers with corresponding isounits over Block Upper Triangular Isomatrices (BUTI); the protocol is secure, feasible, and extensible. We use arithmetic operations from isomathematics, namely isoaddition, isosubtraction, isomultiplication, and isodivision, to build the iso-key interchange protocol for network communication. The protocol is executed with two isointegers corresponding to two elements of the group of isomatrices, whose cryptographic products each party computes. We demonstrate the protection of the proposed iso-key interchange protocol against brute force attacks, the Menezes et al. algorithm, and the Climent et al. algorithm.
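
The isomathematics machinery is specific to the paper, but the general Diffie-Hellman-like shape of such a protocol, commuting powers of a shared public matrix, can be sketched with a plain upper triangular matrix mod p. This toy version is not the paper's BUTI construction and is not secure; all parameters are illustrative.

```python
# Toy key exchange over upper triangular matrices mod p: both parties raise
# a public matrix M to secret exponents; powers of the same matrix commute,
# so both sides derive the same shared key. NOT the paper's isomathematics
# construction, and not secure; it only shows the protocol's general shape.
P = 1_000_003  # small prime modulus (toy value)

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) % P
             for j in range(3)] for i in range(3)]

def mat_pow(m, e):
    result = [[int(i == j) for j in range(3)] for i in range(3)]  # identity
    while e:                      # square-and-multiply exponentiation
        if e & 1:
            result = mat_mul(result, m)
        m, e = mat_mul(m, m), e >> 1
    return result

M = [[2, 5, 7], [0, 3, 1], [0, 0, 4]]  # public upper triangular matrix
a_secret, b_secret = 12345, 67890      # private exponents

A, B = mat_pow(M, a_secret), mat_pow(M, b_secret)    # exchanged publicly
assert mat_pow(B, a_secret) == mat_pow(A, b_secret)  # same shared key
```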

Access efficiency of small sized files in Big Data using various Techniques on Hadoop Distributed File System platform

  • Alange, Neeta;Mathur, Anjali
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.7
    • /
    • pp.359-364
    • /
    • 2021
  • In recent years, Hadoop usage has been increasing day by day, as have the need for and dependency on computing worldwide. With the entire world working online, big data is growing exponentially, and the large amounts of data being produced are difficult to handle and process in a short time. Industries therefore widely use the Hadoop framework to store and process huge amounts of data on their servers within a specified time. Processing this huge amount of data when it consists of small files, and optimizing its storage, is a major problem. Various techniques have already been proposed, including plain HDFS, sequence files, HAR, and NHAR. In this paper, we discuss the existing techniques developed for accessing and storing small files efficiently. Among these techniques, we have specifically tried to implement the HDFS-HAR and NHAR techniques.
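
A minimal sketch of the idea behind HAR-style archiving: pack many small files into one large file plus an index, so the name node tracks one object instead of thousands. The format below is made up for illustration; real HAR archives are built with Hadoop's `hadoop archive` tool.

```python
# Pack small files into one archive file plus a JSON index of
# (name -> offset, length), then read individual files back via the index.
import json

def pack(small_files, archive_path):
    """Concatenate small files into one archive plus an offset index."""
    index, offset = {}, 0
    with open(archive_path, "wb") as out:
        for name in small_files:
            with open(name, "rb") as f:
                data = f.read()
            out.write(data)
            index[name] = (offset, len(data))
            offset += len(data)
    with open(archive_path + ".idx", "w") as f:
        json.dump(index, f)

def read_packed(archive_path, name):
    """Fetch one original file back out of the archive via the index."""
    with open(archive_path + ".idx") as f:
        offset, length = json.load(f)[name]
    with open(archive_path, "rb") as f:
        f.seek(offset)
        return f.read(length)

pack(["a.txt", "b.txt"], "small_files.pack")  # hypothetical inputs
print(read_packed("small_files.pack", "b.txt"))
```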