• Title/Summary/Keyword: Record file management


A Study on the Structure of Headings in Authority Records (전거레코드 표목의 구조화 연구 - 인명과 단체명 전거레코드의 표목을 중심으로 -)

  • Kim, Tae-Soo;Kim, Lee-Kyum;Lee, Hye-Won;Kim, Yong-Kwang;Park, Zi-Young
    • Journal of Information Management / v.40 no.3 / pp.1-21 / 2009
  • This study suggests ideas for structuring the headings in authority records in order to improve the conventional method of authority control. The reference structure between the established form and other forms was replaced by a link structure based on access points, and the adoption of standard authority numbers was considered. Additional elements, such as work information to distinguish homonyms and a notational system for the headings to promote the sharing of authority records, were also addressed. An authority records management system was built to test the heading structure suggested in this study. Through this research, we confirmed that the management, identification, and sharing of authority records improved considerably compared with the conventional authority control system.
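
A minimal sketch of the linking idea described above, assuming a simple in-memory model; the class, fields, and authority number below are hypothetical illustrations, not the authors' system:

```python
# Hypothetical data model (not the authors' implementation): every authority
# record carries a standard authority number, and each access point links to
# that number instead of cross-referencing one privileged "established" heading.
from dataclasses import dataclass, field

@dataclass
class AuthorityRecord:
    authority_number: str                                    # standard number used for linking and sharing
    access_points: list[str] = field(default_factory=list)   # all name forms, none privileged
    work_info: list[str] = field(default_factory=list)       # associated works, used to split homonyms

records = {
    "KAC-0001": AuthorityRecord(
        authority_number="KAC-0001",
        access_points=["Kim, Tae-Soo", "김태수"],
        work_info=["A Study on the Structure of Headings in Authority Records"],
    ),
}

# An index from any access point to the authority number replaces
# "see" references from variant forms to an established heading.
index = {ap: rec.authority_number for rec in records.values() for ap in rec.access_points}
print(index["김태수"])   # -> KAC-0001
```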

Carving deleted voice data in mobile (삭제된 휴대폰 음성 데이터 복원 방법론)

  • Kim, Sang-Dae;Byun, Keun-Duck;Lee, Sang-Jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.22 no.1 / pp.57-65 / 2012
  • People leave voicemails or record phone conversations in their daily cell phone use. Sometimes important voice data is deleted by the user accidentally, or deliberately to cover up criminal activity. In these cases, it must be possible to recover the deleted voice data for forensics, since it can serve as evidence in a criminal case. Because cell phones store data in flash memory, where it is easily fragmented, voice data recovery is very difficult. However, if the deleted voice data have identifiable patterns, a significant amount can be recovered by searching memory images for them. There are several voice data formats, such as QCP, AMR, and MP4. This study presents data recovery methods for the EVRC and AMR codecs in QCP files, Qualcomm's voice data format for cell phones.
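
A minimal carving sketch along the lines described above, assuming a raw flash dump has already been extracted to a file; the dump and output file names are hypothetical. QCP is a RIFF container whose form type is "QLCM", so one recoverable pattern is that header signature (fragmented files would still need reassembly):

```python
# Sketch only, not the paper's tool: scan a raw dump for QCP candidates by
# signature. A QCP file starts with b"RIFF", a little-endian 32-bit size,
# then the form type b"QLCM"; AMR files similarly start with b"#!AMR".
import struct

def carve_qcp(dump: bytes):
    """Yield (offset, data) for candidate QCP files found in a raw image."""
    pos = 0
    while (pos := dump.find(b"RIFF", pos)) != -1:
        if dump[pos + 8 : pos + 12] == b"QLCM":
            riff_size = struct.unpack_from("<I", dump, pos + 4)[0]
            yield pos, dump[pos : pos + 8 + riff_size]   # 8-byte header + payload
        pos += 4

if __name__ == "__main__":
    with open("flash_dump.bin", "rb") as f:              # hypothetical dump file
        image = f.read()
    for offset, data in carve_qcp(image):
        with open(f"carved_{offset:#x}.qcp", "wb") as out:
            out.write(data)
```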

A Study on the Application of Registration Data to the Description of the Records and Archives (기록물 기술을 위한 등록정보의 활용에 관한 연구)

  • 방효순
    • Journal of the Korean Society for Library and Information Science / v.35 no.4 / pp.25-50 / 2001
  • The purpose of this study is to investigate the possibilities and limits of applying registration data to the description of records and archives, by analyzing the registration elements prescribed in the Record and Archives Management Act of Korea. Registration data provide descriptive information about records at the item or file level. They also play an important role in supplying the primary descriptive information from which archivists progressively describe archives at the series and group levels, which ultimately conforms to the principle of multi-level description. Registration is therefore one of the management processes that turns records into manageable objects. In addition, the registration of records is the first step in building the multi-level description of records and archives.

A Study of Metadata for Composite Electronic Records Archiving: With a Focus on Digital Components of E-Learning Contents (복합전자기록물 아카이빙을 위한 메타데이터에 관한 연구 - 이러닝 콘텐츠의 디지털 컴포넌트를 중심으로 -)

  • Lee, Inhyeok;Park, Heejin
    • Journal of Korean Society of Archives and Records Management / v.17 no.3 / pp.115-138 / 2017
  • Electronic record types are becoming diverse, and "composite electronic records," which consist of various types of electronic records with functionality or user interaction that current electronic document formats cannot represent, are increasing. To ensure continuous access to composite electronic records, constructing metadata is a prerequisite for electronic records archiving. In this paper, we propose metadata that can support the archiving of composite electronic records with interactive functionality. Common elements were derived from an analysis of domestic and international file format registry projects, and metadata elements related to functional requirements were identified from an analysis of the records of nursing education e-learning contents. The proposed metadata for archiving composite electronic records consists of 25 high-level elements and 138 sub-elements.
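
The example below only illustrates what such a metadata record might look like as a structured document; the element names are hypothetical stand-ins, not the 25 high-level and 138 sub-elements proposed in the paper. The point is that each digital component of an e-learning package carries its own format, dependency, and functionality metadata so that interactive behaviour can be preserved:

```python
# Hypothetical composite-record metadata instance, serialized for inspection.
import json

composite_record = {
    "identifier": "elearning-2017-0001",
    "title": "Fundamentals of Nursing, Unit 3",
    "components": [
        {
            "component_id": "quiz-01",
            "format": {"name": "HTML5 + JavaScript"},
            "dependencies": ["player.js", "styles.css"],   # files needed to reproduce the interaction
            "functionality": "multiple-choice quiz with immediate feedback",
        },
        {
            "component_id": "video-02",
            "format": {"name": "MP4/H.264"},
            "dependencies": [],
            "functionality": "lecture video with captions",
        },
    ],
}

print(json.dumps(composite_record, indent=2, ensure_ascii=False))
```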

The Current Status of Arrangement and the Direction of Rearrangement of the Archives Relating to the Korean Provisional Assembly (임시의정원 관련 기록물의 정리 현황과 재정리 방향)

  • Park, Dowon
    • The Korean Journal of Archival Studies / no.73 / pp.161-188 / 2022
  • This article explores the current state of arrangement of the archives relating to the Korean Provisional Assembly held by the National Assembly Library and suggests a direction for rearrangement based on archival principles. The Korean Provisional Assembly had records management regulations, and records were produced and stored according to them. However, the archives lost their original order at some point. The National Assembly Library collected and began managing them in the 1960s. When organizing the collected archives, the Library did not fully consider the records management system in place when the records were produced, nor the various situations that may have occurred during storage, and it did not follow the records management regulations of the Korean Provisional Assembly. In addition, the hierarchical structure of the archives was not applied during arrangement, and the Library arranged the materials without regard to the Principle of Provenance or the Principle of Original Order. As a result, it became difficult to understand the structure and context of the archives. To solve these problems and develop a plan for rearrangement, the characteristics of the records related to the Korean Provisional Assembly must first be examined in light of the principles of archival arrangement. First, according to the Principle of Provenance, it is necessary to identify the organizations, functions, and records, and to classify the record items and files by creator, date of creation, type of record, and so on. Second, by applying the Principle of Original Order, it is necessary to establish what the order of the records was when they were created and preserved. Third, it is necessary to examine whether the records were completely created and are valid. It is impossible to restore the archives related to the Korean Provisional Assembly exactly as they once were. However, by examining the current state of arrangement and the direction of rearrangement, it will be possible to understand the contents, structure, and context of the archives anew and to create a basis for effective reference service.

Design and Implementation of Middleware supporting translation of EDI using XML (XML기반의 EDI 문서교환을 위한 미들웨어 설계 및 구현)

  • Choi, Gwang-Mi;Park, Su-Young;Jung, Chai-Yeoung
    • The KIPS Transactions: Part B / v.9B no.6 / pp.845-852 / 2002
  • Electronic document processing using EDI (Electronic Data Interchange) has traditionally required exchanging documents over a VAN (Value Added Network). However, proprietary software must be altered for every new document type, and using a VAN imposes high costs for document exchange and maintenance. Because of these problems, existing EDI is moving toward Web-based EDI. This paper suggests techniques for converting EDI messages stored in two relational databases into XML (eXtensible Markup Language) using a JDBC bridge. It also proposes a method that recovers the schema from the converted XML file and inserts the original records into the declared table. This removes the limitation of the original approach, which required the same database management system on both sides, and overcomes the problem of EDI exchange failing in certain environments.
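
The paper itself uses a JDBC bridge between two relational databases; the sketch below shows the same round trip (table rows to XML, then XML back into a table) in Python with sqlite3 and xml.etree, so the table and column handling is illustrative rather than the paper's EDI message schema:

```python
# Convert relational records to XML and back, recovering the schema from the XML.
import sqlite3
import xml.etree.ElementTree as ET

def records_to_xml(db_path: str, table: str) -> ET.Element:
    """Read every row of `table` and wrap it in an XML document."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    root = ET.Element(table)
    for row in conn.execute(f"SELECT * FROM {table}"):     # table name assumed trusted
        rec = ET.SubElement(root, "record")
        for col in row.keys():
            ET.SubElement(rec, col).text = str(row[col])
    conn.close()
    return root

def xml_to_records(root: ET.Element, db_path: str) -> None:
    """Recreate the table declared by the XML and insert the original records."""
    conn = sqlite3.connect(db_path)
    cols = [child.tag for child in root[0]]                # recover column names from the first record
    conn.execute(f"CREATE TABLE IF NOT EXISTS {root.tag} ({', '.join(cols)})")
    for rec in root:
        conn.execute(
            f"INSERT INTO {root.tag} VALUES ({', '.join('?' for _ in cols)})",
            [rec.find(c).text for c in cols],
        )
    conn.commit()
    conn.close()
```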

Comparison of Remaining Data According to Deletion Events on Microsoft SQL Server (Microsoft SQL Server 삭제 이벤트의 데이터 잔존 비교)

  • Shin, Jiho
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.2 / pp.223-232 / 2017
  • Previous research on data recovery in Microsoft SQL Server has focused on restoring data from the transaction log, in which deleted records may still exist. However, that approach is not applicable when the related transaction log no longer exists or the physical database file is not attached to the server. Since a suspect may delete data records using statements other than "delete", we need to check what data remain and whether the deleted records can be recovered. In this paper, we examined how the table's page allocation information, the unallocated deleted data, and the row offset array within the page change according to "delete", "truncate", and "drop" events. Finally, we confirmed the possibility of data recovery and the usefulness of management tools in Microsoft SQL Server digital forensic investigations.
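
As a rough illustration of inspecting page contents from Python, the sketch below uses pyodbc with the undocumented DBCC IND and DBCC PAGE commands (which exist in SQL Server but require elevated privileges); the connection string, database, and table names are assumptions, not the paper's test setup:

```python
# Inspect data pages of a table to see what survives a DELETE/TRUNCATE/DROP.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=ForensicTest;Trusted_Connection=yes;",       # assumed local test instance
    autocommit=True,
)
cur = conn.cursor()

# List the pages still allocated to the table (this list empties after TRUNCATE/DROP).
cur.execute("DBCC IND ('ForensicTest', 'dbo.Evidence', -1)")
pages = [(r.PageFID, r.PagePID) for r in cur.fetchall() if r.PageType == 1]

# Dump one data page: print option 3 includes the row offset array and per-row
# bytes, where record images left behind by a DELETE may still be visible.
for file_id, page_id in pages[:1]:
    cur.execute(f"DBCC PAGE ('ForensicTest', {file_id}, {page_id}, 3) WITH TABLERESULTS")
    for row in cur.fetchall():
        print(row)
```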

PDA Transmission of Medical Images by CDMA (CDMA에 의한 의료영상의 PDA전송)

  • Lee, Myong-Ho;Lim, Jae-Dong;Ahn, Bung-Ju;Lee, Hwun-Jae;Lee, Sang-Bock
    • Journal of the Korean Society of Radiology / v.1 no.2 / pp.13-22 / 2007
  • The purpose of this study was to develop a wireless transmission system for medical images in support of ubiquitous medicine. Medical equipment, treatment, and record keeping within hospitals have changed considerably, and PACS (Picture Archiving and Communication System), the image management system for patients, is a typical example. These automated medical systems are difficult to use outside the hospital, and when a rapid image reading is needed in an emergency or in a doctor's absence, it is difficult to perform immediately. The present study implemented an image transmission system using a CDMA connection so that images on the server can be viewed at any time and in any place. Remote wireless diagnosis based on medical images using a PDA is applicable to medical areas that require mobility, and the PDA can be an ideal alternative at the point of care. The PDA enables prompt and accurate access to digital medical images, which in turn reduces medical accidents and improves the quality of medical services through higher productivity and efficiency in medical practitioners' work. It also enables quick responses to patients' demands and high-quality medical services, and consequently higher patient satisfaction.

Status of the Constitutional Court Records Management and Improvement (헌법재판소 기록관리현황과 개선방안)

  • Lee, Cheol-Hwan;Lee, Young-Hak
    • The Korean Journal of Archival Studies / no.38 / pp.75-124 / 2013
  • This study pays attention to the special values of the records of the Constitutional Court, discusses their characteristics and present state, and suggests measures for improving their management. First, I defined the concept and scope of Constitutional Court records, examined their types and distinctive features, and on that basis tried to grasp their characteristics. Put simply, the records of the Constitutional Court are essential records, indispensable to the Court's documentation strategy, and they are particularly valuable for the rooting of democracy and the guarantee of human rights in the country. Because they concern nationally important events, their context also reaches into the records of other constitutional institutions and administrative bodies. Second, I analyzed the present state of records management. At the division stage, I surveyed the creation, registration, and classification of records; at the archives repository stage, I examined the preservation of records and the state of their use. On the basis of this survey, I identified problems in how the records are managed and suggested improvements in four areas: infrastructure, process, public access, and use. For infrastructure, I described problems in systems, facilities, and staffing and presented ways to improve them. For process, focusing on classification and appraisal, I pointed out problems and suggested alternatives: in classification, changing the classification structure of trial records; in appraisal, reconsidering how retention periods are assigned to administrative records, since assigning several different retention periods within a single case file risks fragmenting the context of the case. For public access and use, I first identified problems in information disclosure and proposed establishing a broad information disclosure law applicable to all kinds of records; for use, I argued for expanding the possibilities and scope of using the records through cooperation with related institutions.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data produced by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, to gather, store, categorize, and analyze the log data generated while processing clients' business, a separate log data processing system needs to be established. However, existing computing environments make it difficult to provide flexible storage expansion for a massive amount of unstructured log data and to run the many functions needed to categorize and analyze it. Thus, in this study, we use cloud computing technology to build a cloud-based log data processing system for unstructured log data that are difficult to handle with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system can automatically restore itself and continue operating after a malfunction. Finally, by establishing a distributed database on the NoSQL-based MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have strict, complex schemas that are inappropriate for unstructured log data and cannot easily spread data across additional nodes when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion as data grow; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the representative document-oriented database, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when data grow rapidly, and it provides an auto-sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes it to the MongoDB module and the MySQL module. The log graph generator module produces the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insert and query performance, demonstrates the proposed system's advantage. Moreover, an optimal chunk size is identified through MongoDB insert performance tests with various chunk sizes.
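
A minimal sketch of the collector idea (not the authors' code): classify each incoming log line and store it in MongoDB's schema-free collection, leaving real-time items to be mirrored into MySQL separately; the connection details and field names are assumptions:

```python
# Route raw bank log lines into MongoDB, tagging each document with a log type.
import json
from datetime import datetime, timezone

from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")          # assumed local MongoDB instance
log_collection = mongo["banklogs"]["raw_logs"]

def collect(line: str) -> None:
    """Parse one log line and store it together with its classification."""
    doc = {"received_at": datetime.now(timezone.utc), "raw": line}
    try:
        doc.update(json.loads(line))                      # structured lines merge field by field
        doc["log_type"] = doc.get("log_type", "business")
    except ValueError:
        doc["log_type"] = "unstructured"                  # free text kept as a single field
    log_collection.insert_one(doc)

collect('{"log_type": "transfer", "account": "123-456", "amount": 50000}')
collect("2013-11-02 09:12:01 WARN  teller-03 session timeout")
```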