• Title/Summary/Keyword: Information Data Format


A Study on the MARC Format for Classification Data (분류용 MARC 포맷에 관한 연구)

  • Oh Dong-Geun
    • Journal of the Korean Society for Library and Information Science, v.33 no.1, pp.87-111, 1999
  • This article investigates the functions, needs, and development of the MARC format for classification data, and recommends the development of a KORMARC format for classification data. It analyzes the record structure, content designation, and content of the format, mainly based on the USMARC format. The structure and content designation are almost the same as those of the bibliographic and authority formats. The data fields are divided into functional blocks based on their functions. The record contents of the fixed-length fields include additional elements on the classification number, including the type of number, classification validity, standard or optional number, and synthesized number. The variable fields can be grouped into several blocks, including those for numbers and codes; for classification numbers and terms; for references and tracings; for notes; for index terms; and for number building. Data in each field of this format are kept consistent with those in related fields of other formats as far as possible. This article analyzes the content of each data field in detail.
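
As a rough illustration of the record structure described in the abstract above, the sketch below models a classification record with a leader, fixed-length coded data, and variable data fields grouped into functional blocks. The class names, element names, and the sample tag and leader values are assumptions for illustration, not the authoritative USMARC or KORMARC classification layout.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VariableField:
    tag: str                      # e.g. a classification-number field
    indicators: str = "  "
    subfields: Dict[str, str] = field(default_factory=dict)

@dataclass
class ClassificationRecord:
    leader: str                                            # 24-character record label
    fixed: Dict[str, str] = field(default_factory=dict)    # fixed-length coded elements
    fields: List[VariableField] = field(default_factory=list)

    def block(self, prefix: str) -> List[VariableField]:
        """Return the variable fields in one functional block, selected by tag prefix."""
        return [f for f in self.fields if f.tag.startswith(prefix)]

record = ClassificationRecord(
    leader="00000nw  a2200000n  4500",                     # illustrative value only
    fixed={"type_of_number": "single", "classification_validity": "valid",
           "standard_or_optional": "standard", "synthesized_number": "no"},
    fields=[VariableField("153", subfields={"a": "025.3", "j": "Cataloging"})],
)
print([f.tag for f in record.block("1")])   # fields in the numbers-and-terms block
```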


The Modeling of the Optimal Data Format for JPEG2000 CODEC on the Fixed Compression Ratio (고정 압축률에서의 JPEG2000 코덱을 위한 최적의 데이터 형식 모델링)

  • Seo, Choon-Weon
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers, v.19 no.6, pp.109-116, 2005
  • Recently, images and videos have been preferred as communication media because of the information they carry and their easy recognizability, but the amount of data involved is so large that compressing it has become a major research area. This paper concerns optimization of the image data format, which has a large effect on data-compression performance, and is based on the wavelet transform and JPEG2000. The paper establishes a criterion for deciding the data format to be used in the wavelet transform, based on the data errors introduced by the frequency transform and quantization, and uses this criterion to derive the optimal data format experimentally. The results were a (1, 9) 10-bit fixed-point format for the filter coefficients and a (9, 7) 16-bit fixed-point format for the wavelet coefficients, and their optimality was confirmed.
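
As a small sketch of what a fixed-point data format such as the (9, 7) 16-bit format above means in practice, the helpers below quantize a real value into a two's-complement fixed-point code with a given number of integer and fractional bits and convert it back. The function names, the saturating rounding choice, and the assumption that the sign bit is counted within the integer bits of the (integer, fraction) notation are all illustrative, not taken from the paper.

```python
def to_fixed(value: float, int_bits: int, frac_bits: int) -> int:
    """Quantize a real number to a two's-complement fixed-point code with
    int_bits + frac_bits total bits (sign assumed to be counted in int_bits)."""
    total = int_bits + frac_bits
    scaled = round(value * (1 << frac_bits))
    lo, hi = -(1 << (total - 1)), (1 << (total - 1)) - 1
    return max(lo, min(hi, scaled))            # saturate instead of wrapping on overflow

def from_fixed(code: int, frac_bits: int) -> float:
    """Convert a fixed-point code back to a real number."""
    return code / (1 << frac_bits)

# Example: a wavelet coefficient stored in a (9, 7)-style 16-bit word.
c = 123.456789
q = to_fixed(c, int_bits=9, frac_bits=7)
print(q, from_fixed(q, frac_bits=7))           # rounding error is at most 2**-8 ≈ 0.004
```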

CNN based IEEE 802.11 WLAN frame format detection (CNN 기반의 IEEE 802.11 WLAN 프레임 포맷 검출)

  • Kim, Minjae;Ahn, Heungseop;Choi, Seungwon
    • Journal of Korea Society of Digital Industry and Information Management, v.16 no.2, pp.27-33, 2020
  • Backward compatibility is one of the key issues for radio equipment supporting IEEE 802.11, the typical wireless local area network (WLAN) communication protocol. For successful packet decoding with backward compatibility, frame format detection is a core precondition. This paper presents a novel frame format detection method based on deep learning for WLANs affiliated with IEEE 802.11. Considering that the detection performance of conventional methods is degraded mainly by poor symbol synchronization and/or channel estimation in low signal-to-noise-ratio environments, we propose a detection method based on a convolutional neural network (CNN) that replaces the entire conventional detection procedure. The proposed network provides robust detection directly from the received data. Through extensive computer simulations performed in multipath fading channel environments (modeled after Project IEEE 802.11 Task Group ac), the proposed method shows a substantial improvement in frame format detection compared to the conventional method.
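
As an illustrative sketch of the general idea (not the network from the paper), the model below classifies the frame format directly from raw received I/Q samples with a 1-D CNN, skipping explicit synchronization and channel estimation. The layer sizes, the three-class label set (e.g. legacy/HT/VHT preambles), and the input length are assumptions.

```python
import torch
import torch.nn as nn

class FrameFormatCNN(nn.Module):
    """Toy 1-D CNN mapping raw baseband samples (2 channels: I and Q)
    to one of a few assumed IEEE 802.11 frame formats."""
    def __init__(self, num_formats: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=16, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_formats)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, num_samples) received samples, no sync or CSI required
        return self.classifier(self.features(x).squeeze(-1))

model = FrameFormatCNN()
iq = torch.randn(4, 2, 1024)        # a batch of 4 received bursts (random stand-in data)
print(model(iq).argmax(dim=1))      # predicted frame format index per burst
```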

A Study on the Description Elements for the Management of Special Format Archives (특수형태 기록물 관리를 위한 기술요소에 관한 연구)

  • Park Jin-Hee;Lee Too-Young
    • Journal of the Korean Society for Library and Information Science, v.39 no.1, pp.241-263, 2005
  • The purpose of this study is to investigate description elements for the effective management of special-format archives. The existing description rules and data elements, covering both general archives and special-format archives, were analyzed in order to extract the core description elements for special-format archives. As a result, the study proposes an overall set of description elements structured according to the basic format of ISAD(G).
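
As a small illustration of how such description elements can be organized, the dictionary below groups a few example elements under the ISAD(G) areas of description. The selection of elements, and which of them count as "core" for special-format archives, are assumptions for illustration, not the element set the paper proposes.

```python
# Example elements grouped by the seven ISAD(G) areas of description.
ISADG_ELEMENTS = {
    "Identity statement":       ["reference code", "title", "dates",
                                 "level of description", "extent and medium"],
    "Context":                  ["name of creator", "archival history"],
    "Content and structure":    ["scope and content", "system of arrangement"],
    "Conditions of access/use": ["access conditions", "physical characteristics",
                                 "technical requirements"],
    "Allied materials":         ["existence and location of copies"],
    "Notes":                    ["note"],
    "Description control":      ["archivist's note", "date(s) of descriptions"],
}

def core_elements_for(format_type: str) -> list:
    """Return a hypothetical core element list, emphasizing technical
    requirements for audio-visual and electronic records."""
    core = ISADG_ELEMENTS["Identity statement"] + ["scope and content"]
    if format_type in {"audio-visual", "electronic"}:
        core += ["technical requirements", "physical characteristics"]
    return core

print(core_elements_for("electronic"))
```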

A Method of Recovery for Damaged ZIP Files (손상된 ZIP 파일 복구 기법)

  • Jung, Byungjoon;Han, Jaehyeok;Lee, Sang-jin
    • Journal of the Korea Institute of Information Security & Cryptology, v.27 no.5, pp.1107-1115, 2017
  • The ZIP file is the most common use of the PKZIP format, which is also the file format behind MS Office files and application files for Android smartphones. PKZIP-format files, which are widely used in many areas, require structural analysis from the viewpoint of digital forensics, and it should be possible to recover them when they are damaged. However, previous studies have focused only on recovering or extracting meaningful data from the Deflate-compressed streams used in ZIP files. Although most of the data in a ZIP file resides in the compressed streams, the rest of the file also contains forensically meaningful data, so a damaged file needs to be restored to a normal ZIP file format. Therefore, this paper presents a technique for recovering a given damaged ZIP file to a normal ZIP file.
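
As a rough, simplified sketch of the kind of structural analysis described above (not the paper's recovery algorithm), the code below scans a damaged file for local file header signatures and carves out the entry metadata that precedes each compressed stream; the field offsets follow the published PKZIP APPNOTE layout, and the file name is a placeholder.

```python
import struct

LOCAL_HEADER_SIG = b"PK\x03\x04"   # local file header signature (PKZIP APPNOTE)

def carve_local_entries(data: bytes):
    """Scan raw bytes for ZIP local file headers and yield entry metadata.
    Simplified sketch: it trusts the sizes stored in each header and ignores
    data descriptors, ZIP64 extensions and encryption."""
    pos = 0
    while (pos := data.find(LOCAL_HEADER_SIG, pos)) != -1:
        if pos + 30 > len(data):
            break
        # Fixed 30-byte local header: method, timestamps, CRC-32, sizes, lengths.
        (method, _mtime, _mdate, crc, comp_size, _uncomp_size,
         name_len, extra_len) = struct.unpack_from("<HHHIIIHH", data, pos + 8)
        name_start = pos + 30
        data_start = name_start + name_len + extra_len
        name = data[name_start:name_start + name_len].decode("cp437", "replace")
        yield {"offset": pos, "name": name, "method": method, "crc": crc,
               "compressed": data[data_start:data_start + comp_size]}
        pos = data_start + comp_size        # continue scanning after this entry

# Usage: list the entries that survive in a damaged archive; a full recovery step
# would then rebuild the central directory and end-of-central-directory record.
with open("damaged.zip", "rb") as f:        # hypothetical input file
    for entry in carve_local_entries(f.read()):
        print(entry["offset"], entry["name"], len(entry["compressed"]))
```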

MARC FORMAT Implementation on the Development of An Online Catalog For Machine-Readable Data Files (컴퓨터 소프트웨어 및 화일들을 위한 온라인 목록 개발시 MARC 형식의 적용 방안)

  • Moon, Gee-Ju
    • IE interfaces, v.4 no.2, pp.93-101, 1991
  • One of the major problems in designing an online database for machine-readable data files is the implementation of the MARC format for communication with the Library of Congress (LC) or OCLC. Most of the cataloging data used to make manual card catalogs are stored on magnetic tapes in the MARC format at LC or OCLC and are sent to local libraries. Local libraries can therefore avoid the expensive process of cataloging the books they own; instead, they can retrieve the necessary cataloging information from the tape and print out manual card catalogs. A problem with MARC is that it is designed not for databases but for portability, so that it can be read on any type of computer. It is therefore not practical to use the format directly when developing an online database, as long as the database is built with a powerful commercial database package. In this paper a possible methodology for resolving the conflict between the objectives of a DBMS and MARC is discussed: satisfy the requirements of a commercial DBMS while leaving room for MARC to communicate with LC and OCLC.
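
As a rough sketch of one way to bridge the two sides discussed above, the code below parses a MARC communications record (ISO 2709 structure: 24-byte leader, directory, data fields) into flat (tag, indicators, subfield code, value) rows that a relational DBMS can store, while the untouched MARC record remains available for exchange with LC or OCLC. The function name and the table layout in the comment are assumptions, not the methodology of the paper.

```python
FT, RT, SF = "\x1e", "\x1d", "\x1f"   # field, record and subfield delimiters (ISO 2709)

def marc_to_rows(record: str):
    """Split one MARC communications record into flat rows suitable for a
    relational table. Sketch only: assumes a well-formed record whose
    character data has already been decoded to str."""
    leader = record[:24]
    base = int(leader[12:17])                      # base address of data
    directory, rows = record[24:record.index(FT)], []
    for i in range(0, len(directory), 12):         # 12-byte directory entries
        tag = directory[i:i + 3]
        length = int(directory[i + 3:i + 7])
        start = int(directory[i + 7:i + 12])
        content = record[base + start: base + start + length].rstrip(FT)
        if tag < "010":                            # control fields: no indicators/subfields
            rows.append((tag, "", "", content))
            continue
        indicators, *subfields = content.split(SF)
        for sf in subfields:
            rows.append((tag, indicators, sf[:1], sf[1:]))
    return rows

# Each row maps naturally onto a table such as
#   CREATE TABLE marc_field(record_id, tag, indicators, code, value);
# while the original ISO 2709 record is kept for LC/OCLC communication.
```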


A Framework for Internet of Things (IoT) Data Management

  • Kim, Kyung-Chang
    • Journal of the Korea Society of Computer and Information, v.24 no.3, pp.159-166, 2019
  • The collection and manipulation of Internet of Things (IoT) data is increasing at a fast pace, and its importance is recognized in every sector of our society. For efficient utilization, the vast and varied IoT data needs to be reliable and meaningful. In this paper, we propose an IoT framework to realize this need. The framework is based on a four-layer IoT architecture onto which context-aware computing technology is applied. If the collected IoT data is unreliable, it cannot be used for its intended purpose and the whole service using the data must be abandoned. We therefore include techniques that remove uncertainty in the early stage of IoT data capture and collection, resulting in reliable data. Since the data coming out of the various IoT devices have different formats, it is important to convert them into a standard format before further processing; we propose RDF as the standard format for all IoT data. In addition, it is not feasible to process all IoT data captured from the sensor devices. To decide which data to process and understand, we propose to use contexts and reasoning based on these contexts. For reasoning, we propose standard AI and statistical techniques. We also propose an experiment environment that can be used to develop an IoT application realizing the framework.
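
As a small sketch of the "convert everything to RDF" step, the code below turns one sensor reading into RDF triples with the rdflib library. The vocabulary (a SOSA-style observation), the example namespace, and the device and observation IRIs are assumptions for illustration, not the mapping defined in the paper.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")   # W3C sensor/observation vocabulary
EX = Namespace("http://example.org/iot/")        # assumed namespace for our devices

def reading_to_rdf(sensor_id: str, value: float, unit: str, timestamp: str) -> Graph:
    """Convert one raw sensor reading into RDF triples (SOSA-style observation)."""
    g = Graph()
    obs = URIRef(EX[f"obs/{sensor_id}/{timestamp}"])
    g.add((obs, RDF.type, SOSA.Observation))
    g.add((obs, SOSA.madeBySensor, EX[sensor_id]))
    g.add((obs, SOSA.hasSimpleResult, Literal(value, datatype=XSD.double)))
    g.add((obs, EX.unit, Literal(unit)))
    g.add((obs, SOSA.resultTime, Literal(timestamp, datatype=XSD.dateTime)))
    return g

print(reading_to_rdf("thermo-42", 21.5, "Cel", "2019-03-01T09:00:00Z")
      .serialize(format="turtle"))
```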

Development of HDF Browser for the Utilization of EOC Imagery

  • Seo, Hee-Kyung;Ahn, Seok-Beom;Park, Eun-Chul;Hahn, Kwang-Soo;Choi, Joon-Soo;Kim, Choen
    • Korean Journal of Remote Sensing, v.18 no.1, pp.61-69, 2002
  • The purpose of the Electro-Optical Camera (EOC), the primary payload of KOMPSAT-1, is to collect high-resolution visible imagery of the Earth, including the Korean Peninsula. EOC images will be distributed to the public and to many user groups, including government agencies, public corporations, and academic or research institutes; KARI will offer an online service to users through the Internet. Some applications, e.g., generation of a Digital Elevation Model (DEM), need secondary data such as satellite ephemeris and attitude data to process the EOC imagery. EOC imagery, together with this ancillary information, will be distributed as a file in the Hierarchical Data Format (HDF). HDF is a physical file format that allows storage of many different types of scientific data, including images, multidimensional data arrays, record-oriented data, and point data. Because of the lack of public-domain software supporting the HDF file format, many public users cannot access EOC data without difficulty. The purpose of this research is to develop a browsing system for EOC data aimed at general users, not only at the scientists who are the main users of HDF. The system is PC-based and has a user-friendly interface.
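
As a rough sketch of what such a browser does at its core, the code below walks an HDF file and prints every group and dataset with its shape, type, and attributes. The HDF distribution described above predates the now-common HDF5, so purely for illustration this sketch uses the HDF5 Python binding h5py; the file name is a placeholder.

```python
import h5py

def browse(path: str) -> None:
    """Print the hierarchy of an HDF5 file: groups, datasets, shapes, dtypes
    and attributes, which is essentially what an HDF browser displays."""
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"dataset {name}: shape={obj.shape}, dtype={obj.dtype}")
        else:
            print(f"group   {name}/")
        for key, value in obj.attrs.items():        # ancillary data such as
            print(f"    attr {key} = {value!r}")     # ephemeris or attitude info
    with h5py.File(path, "r") as f:
        f.visititems(show)

browse("eoc_scene.h5")   # hypothetical EOC product file
```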

An effective detection method for hiding data in compound-document files (복합문서 파일에 은닉된 데이터 탐지 기법에 대한 연구)

  • Kim, EunKwang;Jeon, SangJun;Han, JaeHyeok;Lee, MinWook;Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology, v.25 no.6, pp.1485-1494, 2015
  • Traditionally, data hiding has been done mainly by inserting the data into large-capacity multimedia files. However, document files from Microsoft Office 2003 and earlier versions have also been used as cover files, because their structure is so similar to a file system that it is easy to hide data in them. When a compound-document file with a secret message hidden in it is opened with an MS Office application, users who do not know whether a message is hidden there find it hard to detect. This paper presents an analysis of the Compound File Binary Format features exploited to hide data, and algorithms to detect the data hidden through these exploits. Studying the methods used to hide data in unused areas, unallocated areas, reserved areas, and inserted streams led us to develop an algorithm to aid in the detection and examination of hidden data.
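
As a simplified illustration of one of the ideas above (not the paper's algorithms), the sketch below uses the olefile library to compare the space accounted for by the declared streams of a compound document with its physical file size; a large unexplained gap can hint at data hidden in unused or unallocated areas. The use of olefile, the sector-rounding heuristic, and the threshold are all assumptions.

```python
import os
import olefile

SECTOR = 512   # default sector size of the Compound File Binary format

def suspicious_slack(path: str, threshold: int = 8 * SECTOR) -> bool:
    """Heuristic sketch: flag a compound document whose physical size greatly
    exceeds the space its declared streams (rounded up to whole sectors) plus
    the header can account for. A big gap only hints at hidden data; real
    detection needs the FAT/sector-level analysis the paper describes."""
    ole = olefile.OleFileIO(path)
    try:
        accounted = SECTOR                                 # 512-byte CFB header
        for name in ole.listdir(streams=True, storages=False):
            size = ole.get_size("/".join(name))
            accounted += -(-size // SECTOR) * SECTOR       # round up to whole sectors
    finally:
        ole.close()
    return os.path.getsize(path) - accounted > threshold

print(suspicious_slack("report.doc"))   # hypothetical Office 2003 document
```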

Bio-Sensing Convergence Big Data Computing Architecture (바이오센싱 융합 빅데이터 컴퓨팅 아키텍처)

  • Ko, Myung-Sook;Lee, Tae-Gyu
    • KIPS Transactions on Software and Data Engineering, v.7 no.2, pp.43-50, 2018
  • Biometric information computing, built on bio-information systems that combine bio-signal sensors with bio-information processing, is strongly influencing both computing systems and big-data systems. Unlike conventional data formats such as text, images, and video, biometric information is represented as text-based values that give meaning to a bio-signal; important event moments are stored in an image format; and complex formats such as video are constructed for data prediction and analysis through time-series analysis. Such a complex data structure may be requested separately as text, image, or video, depending on the data characteristics required by individual biometric-information application services, or several formats may be requested at the same time depending on the situation. Since previous bio-information processing systems depend on conventional computing components, computing structures, and data-processing methods, they have many inefficiencies in data-processing performance, transmission capability, storage efficiency, and system safety. In this study, we propose an improved bio-sensing convergence big-data computing architecture to build a platform that supports biometric information processing effectively. The proposed architecture supports data storage and transmission efficiency, computing performance, and system stability, and it can lay the foundation for system implementation and for biometric-information services optimized for future biometric information computing.
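
As a small illustration of the mixed data formats described above, the sketch below models a biometric record that keeps the text-based signal values, optional image snapshots of important events, and a reference to a video segment for time-series analysis, and lets a service request only the formats it needs. All class, field, and channel names are assumptions for illustration, not the paper's architecture.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class BioRecord:
    """One biometric measurement in mixed formats: text-based values for the
    signal, image snapshots for key events, and a reference to a video segment."""
    sensor_id: str
    timestamp: float
    values: Dict[str, float]                                  # text/number channels, e.g. {"hr": 72.0}
    event_images: List[bytes] = field(default_factory=list)   # encoded snapshots of events
    video_segment: Optional[str] = None                       # URI of a stored video clip

    def view(self, formats: set) -> dict:
        """Return only the formats a particular application service requests."""
        out = {}
        if "text" in formats:
            out["values"] = self.values
        if "image" in formats:
            out["event_images"] = self.event_images
        if "video" in formats:
            out["video_segment"] = self.video_segment
        return out

rec = BioRecord("ecg-01", 1517443200.0, {"hr": 72.0, "rr": 0.83})
print(rec.view({"text", "video"}))   # a service asking for text values and video only
```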