• Title/Summary/Keyword: File Management Service


Usability Testing of Open Source Software for Digital Archiving (디지털 아카이브 구축을 위한 공개 소프트웨어 사용성 평가)

  • Jeon, Kyungsun;Chang, Yunkeum
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.52 no.3
    • /
    • pp.247-271
    • /
    • 2018
  • This research explores the potential of open source software for building digital archives in small organizations or by ordinary individuals who lack the budget and professional staff, so that they can create digital archives without expert help. The study selected three open source packages for this purpose, AtoM, ArchivesSpace, and Omeka, and conducted usability tests with system designers and users who had no prior experience with open source software. The tests showed that AtoM, which was developed to support the records management systems and user services of small organizations, satisfied both system designers and users. ArchivesSpace required too many mandatory fields to be practical for creating archives. Omeka greatly satisfied the system designers because archives can be created with simple item-level inputs; however, because Omeka emphasizes exhibition functions while neglecting search, it registered low satisfaction among users. Based on these results, the study proposes selection criteria for open source software for small organizations and individuals: purpose, license, characteristics, service creation environment, advantages and disadvantages, functions, metadata, file types, and interoperability.

A Study on the Validation of Vector Data Model for River-Geospatial Information and Building Its Portal System (하천공간정보의 벡터데이터 모델 검증 및 포털 구축에 관한 연구)

  • Shin, Hyung-Jin;Chae, Hyo-Sok;Hwang, Eui-Ho
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.17 no.2
    • /
    • pp.95-106
    • /
    • 2014
  • In this study, the applicability of a standard vector model was evaluated using RIMGIS vector data, and a portal-based river-geospatial information web service system was developed with XML- and JSON-based data linkage between server and client. RIMGIS vector data comprising points, lines, and polygons were converted to the Geospatial Data Model (GDM) developed in this study and validated layer by layer. After conversion, the attribute data of the shapefiles were confirmed to remain intact. A management module was developed for the GeoServer GDB (GeoDataBase) that manages the portal's database. The XML-based Geography Markup Language (GML) standard of the OGC was used for accessing and managing vector layers and for encoding spatial data. The separation of data content from presentation in GML allows different renderings of the same data, convenient revision and updating, and improved extensibility, as sketched in the example below. In the future, the access, exchange, and storage of river-geospatial information should be improved through user-customized services and better Internet accessibility.
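
The content/expression separation described above can be illustrated with a short sketch. This is not code from the paper; the feature, field names, and coordinates are invented placeholders. The same point feature is serialized once as a GML-style XML fragment for the OGC-standard exchange path and once as JSON for the web client:

```python
# Hypothetical sketch: one vector feature, two presentations (GML/XML and JSON).
import json
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"

def feature_to_gml(fid, lon, lat, attrs):
    """Encode a point feature as a GML-style XML fragment."""
    feat = ET.Element("Feature", {"fid": fid})
    point = ET.SubElement(feat, f"{{{GML_NS}}}Point")
    ET.SubElement(point, f"{{{GML_NS}}}pos").text = f"{lon} {lat}"
    for key, value in attrs.items():   # attribute data carried alongside geometry
        ET.SubElement(feat, key).text = str(value)
    return ET.tostring(feat, encoding="unicode")

def feature_to_json(fid, lon, lat, attrs):
    """Encode the same feature as JSON for the browser client."""
    return json.dumps({"fid": fid,
                       "geometry": {"type": "Point", "coordinates": [lon, lat]},
                       "properties": attrs})

if __name__ == "__main__":
    print(feature_to_gml("river-001", 127.28, 36.48, {"name": "gauge station"}))
    print(feature_to_json("river-001", 127.28, 36.48, {"name": "gauge station"}))
```

Because the geometry and attributes live in one data model, either serializer can be revised or extended without touching the other, which is the extensibility point the abstract makes.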

The Automation Model of Ransomware Analysis and Detection Pattern (랜섬웨어 분석 및 탐지패턴 자동화 모델에 관한 연구)

  • Lee, Hoo-Ki;Seong, Jong-Hyuk;Kim, Yu-Cheon;Kim, Jong-Bae;Gim, Gwang-Yong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.8
    • /
    • pp.1581-1588
    • /
    • 2017
  • Recently circulating ransomware has become intelligent and sophisticated, moving beyond the existing attack pattern of encrypting files and demanding money: new viruses and variants spread continuously, targeted distribution relies on social engineering attacks, malvertising circulates ransomware in bulk by hacking advertising servers, and RaaS (Ransomware-as-a-Service) lowers the barrier to entry. In particular, ransomware now bypasses security solutions, disables parameter checking via file encryption, and is combined with APT (Advanced Persistent Threat) attacks against chosen targets, which makes attackers difficult to track. Various detection techniques have been developed to counter the ransomware threat, but responding to new and variant strains remains very hard. Accordingly, this paper analyzes how signature-based detection patterns are created and where they fall short, and presents an automation model for generating ransomware detection patterns so that ransomware can be countered more proactively; a simplified sketch of the signature-based stage follows. This study is expected to be applicable in various forms in enterprise or public security operations centers.
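
For readers unfamiliar with the signature-based detection the abstract starts from, here is a minimal, hedged sketch: known samples are reduced to SHA-256 hashes and byte patterns, and candidate files are matched against both. The hash and patterns are invented placeholders, not real indicators, and this is not the authors' model:

```python
# Hypothetical signature-based scan: hash match or byte-pattern match.
import hashlib
from pathlib import Path

KNOWN_HASHES = {"0" * 64}  # placeholder SHA-256 digests of known samples
KNOWN_PATTERNS = [b"YOUR_FILES_ARE_ENCRYPTED", b".locked_ext"]  # placeholder byte signatures

def matches_signature(path: Path) -> bool:
    """True if the file's hash or contents match a known signature."""
    data = path.read_bytes()
    if hashlib.sha256(data).hexdigest() in KNOWN_HASHES:
        return True
    return any(pattern in data for pattern in KNOWN_PATTERNS)

def scan(root: str):
    """Walk a directory tree and yield files that match a signature."""
    for path in Path(root).rglob("*"):
        if path.is_file() and matches_signature(path):
            yield path

if __name__ == "__main__":
    for hit in scan("./samples"):  # placeholder sample directory
        print("signature match:", hit)
```

The weakness the paper targets is visible here: any change to the payload defeats both checks, which is why automating the generation of new detection patterns matters for new and variant strains.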

Impact of a 'Proactive Self-Audit Program of Fraudulent Claims' on Healthcare Providers' Claims Patterns: Intravenous Injections (KK020) (부당청구 예방형 자율점검제가 의료기관의 청구행태에 미치는 영향: 정맥 내 일시주사(KK020)를 중심으로)

  • Hee-Hwa Lee;Young-Joo Won;Kwang-Soo Lee;Ki-Bong Yoo
    • Health Policy and Management
    • /
    • v.34 no.2
    • /
    • pp.163-177
    • /
    • 2024
  • Background: This study examines changes in fraudulent claim counts and total reimbursements before and after enhancements in counterfeit claim controls and monitoring of provider claim patterns under the "Proactive self-audit pilot program of fraudulent claims." Methods: This study used the claims data and hospital information (July 2021-February 2022) of the Health Insurance Review and Assessment Service. The data were collected from 1,129 hospitals assigned to the pilot program, selected from providers who had filed reimbursement claims for intravenous injections. Paired and independent t-tests, along with regression analysis, were used to analyze changing patterns and the factors influencing claim behavior; a sketch of these comparisons follows the abstract. Results: The program led to a reduction in the number of fraudulent claims and in total reimbursements across all levels of hospitals in the experimental groups (except for physicians under 40 years old). In the control group, general hospitals and hospitals showed some significant decreases depending on the duration since opening, while clinics showed significant reductions in specified subjects. Additionally, a notable increase was observed among male physicians over the age of 50. Overall, claims and reimbursements declined significantly after the intervention. Furthermore, a positive correlation was found between hospital opening duration and claim numbers, suggesting that longer-established hospitals were more likely to file claims. Conclusion: The results indicate that the pilot program successfully encouraged providers to autonomously minimize fraudulent claims. It is therefore advisable to extend further support, including promotional activities, training, seminars, and continuous monitoring, to nonparticipating hospitals to facilitate independent improvements in their claim practices.
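
As a hedged illustration of the statistical tests the abstract names (not the paper's code; synthetic claim counts stand in for the HIRA data), the two comparisons could be run like this:

```python
# Synthetic before/after claim counts; the real analysis used HIRA claims data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.poisson(30, size=100)                       # claims per hospital, pre-pilot
post = np.maximum(pre - rng.poisson(5, size=100), 0)  # claims after the pilot

# Paired t-test: the same hospitals measured before and after the intervention.
t_paired, p_paired = stats.ttest_rel(pre, post)

# Independent t-test: pilot hospitals vs. a separate control group.
control = rng.poisson(30, size=100)
t_ind, p_ind = stats.ttest_ind(post, control, equal_var=False)

print(f"paired:      t={t_paired:.2f}, p={p_paired:.4g}")
print(f"independent: t={t_ind:.2f}, p={p_ind:.4g}")
```

The paired test captures the within-hospital before/after change; the independent test compares the pilot group against hospitals never assigned to the program.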

The Current Status of Arrangement and the Direction of Rearrangement of the Archives Relating to the Korean Provisional Assembly (임시의정원 관련 기록물의 정리 현황과 재정리 방향)

  • Park, Dowon
    • The Korean Journal of Archival Studies
    • /
    • no.73
    • /
    • pp.161-188
    • /
    • 2022
  • This article explores the current state of arrangement of the archives relating to the Korean Provisional Assembly held by the National Assembly Library and suggests a direction for rearrangement centered on the principles of archival arrangement. The Korean Provisional Assembly had records management regulations, and records were produced and stored according to them. However, the archives lost their original order at some point. The National Assembly Library collected and managed them in the 1960s, but in organizing the collected archives it did not fully consider the records management system in place at the time of production or the various situations that can arise during storage, nor did it follow the records management regulations of the Korean Provisional Assembly. In addition, the hierarchical structure of archives was not applied during arrangement, and the Library arranged the records without regard to the Principle of Provenance or the Principle of Original Order. As a result, it became difficult to understand the structure and context of the archives. To solve these problems and devise a plan for rearrangement, the characteristics of the records must first be examined in light of the principles of archival arrangement. First, according to the Principle of Provenance, it is necessary to identify the organizations, functions, and records involved and to classify record items and record files by creator, date of creation, type of record, and so on. Second, applying the Principle of Original Order, it is necessary to establish the order the records held when they were created and preserved. Third, it is necessary to examine whether the records are complete and valid. It is impossible to restore the archives of the Korean Provisional Assembly exactly as they once were, but examining the current state of arrangement and the direction of rearrangement makes it possible to understand the contents, structure, and context of the archives anew and to lay the groundwork for effective reference service. A small sketch of the hierarchical description this implies follows.
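
To make the two principles concrete, here is a minimal sketch (not from the article; all class and field names are illustrative) of a hierarchical description in which records are grouped by provenance and kept in their original order within each level:

```python
# Hypothetical hierarchy: series -> file -> item, grouped by creator (provenance),
# with list order standing in for the original order of the records.
from dataclasses import dataclass, field

@dataclass
class RecordItem:
    title: str
    creator: str        # provenance: the body that produced the record
    date_created: str
    record_type: str

@dataclass
class RecordFile:
    title: str
    items: list = field(default_factory=list)   # preserved in original order

@dataclass
class RecordSeries:
    function: str       # the organizational function that generated the series
    files: list = field(default_factory=list)

proceedings = RecordSeries(function="Assembly proceedings")
minutes = RecordFile(title="Session minutes")
minutes.items.append(RecordItem("First session minutes",
                                "Korean Provisional Assembly", "1919", "minutes"))
proceedings.files.append(minutes)
print(proceedings)
```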

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing clients' business, a separate log data processing system needs to be established. However, existing computing environments make it difficult to realize the flexible storage expansion needed for massive amounts of unstructured log data and to execute the many functions required to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as extended storage needs or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it continues to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL database MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex, rigid schemas that are inappropriate for unstructured log data, and such strict schemas prevent node expansion when stored data must be distributed across nodes as volume rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when data volume grows rapidly; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, MongoDB, the representative document-oriented store with a free schema structure, is used in the proposed system: its flexible schema makes unstructured log data easy to process, it facilitates node expansion as data volume grows, and its auto-sharding function automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over a bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analyses of the MongoDB, Hadoop-based analysis, and MySQL modules per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module, plotted as graphs according to the user's analysis conditions, and parallel-distributed and processed by the Hadoop-based analysis module; a minimal sketch of this flexible-schema storage appears below. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's advantages, and an optimal chunk size is identified through MongoDB insert-performance tests over various chunk sizes.
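
As a minimal sketch of the flexible-schema storage described above (assuming a local MongoDB instance; the connection string, database, collection, and field names are placeholders, not the paper's deployment), two log records with different shapes can share one collection with no schema migration:

```python
# Hypothetical flexible-schema insertion and per-type aggregation with pymongo.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder local instance
logs = client["bank"]["unstructured_logs"]

# Heterogeneous documents: a web-access log and a batch-job log coexist.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "type": "web",
     "client_ip": "10.0.0.8", "action": "balance_inquiry"},
    {"ts": datetime.now(timezone.utc), "type": "batch",
     "job": "eod_settlement", "duration_ms": 8211, "status": "ok"},
])

# Per-type counts: the kind of aggregate the log graph generator module plots.
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row)
```

A relational table would need either a schema change or a sparse catch-all column for each new log shape; the document model absorbs both records as-is, which is the rationale the abstract gives for choosing MongoDB.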

A Comparison of Chemical Abstracts and CA Condensates for Literature Searching (문헌검색에 있어서 Chemical Abstracts와 CA Condensates의 비교)

  • Robert, B.E.
    • Journal of Information Management
    • /
    • v.9 no.1
    • /
    • pp.21-25
    • /
    • 1976
  • In March 1975, four and a half years of Chemical Abstracts indexes were compared with the online-searchable CA Condensates. Using the two databases together is the most efficient search method, but as the examples show, searching CA Condensates alone is often more practical. CHEMCON and CHEM7071, the online forms installed at System Development Corp. (SDC), were compared with the Chemical Abstracts indexes. Most Chemical Abstracts users are familiar with the printed Chemical Abstracts issues and cumulative indexes, but probably not with CA Condensates, which contains bibliographic data in machine-readable form, indexed following Chemical Abstracts, so that it takes the form most familiar to users, like the index at the back of each weekly Chemical Abstracts issue. Although Chemical Abstracts itself is the database in current use, this paper defines both the Index and Condensates as databases. When Condensates became commercially available from the Chemical Abstracts Service in the United States, many information centers began an SDI service that processed user profiles in batch mode and retrieved current information from the weekly magnetic tapes; some centers collect back tapes and also offer retrospective literature searching, again in batch mode. Except for those who handle the magnetic tapes directly, most people are still unfamiliar with Condensates. Retrospective searching is rather expensive, and haphazard browsing through the literature is impractical. Combining two or more concepts or substances in a search against the weekly indexes is difficult and impractical; it is often easier to scan all the citations under a given term and judge their relevance from the abstracts. The availability of Condensates for interactive online searching has brought many changes. It is now possible to retrieve only the needed documents, and every field can be fully indexed. The cumbersome delay of batch systems between starting a search and receiving the results, which took from several hours to several days, can now usually be reduced to a few minutes, and unlike in batch systems, inaccurate or insufficient search strategies can be corrected immediately. Ad hoc online searching has clear and definite advantages over sequential batch searching. As CA Condensates came into frequent use, its true value was debated. Its indexing is certainly less systematic and less thorough than that of the CA abstract issues or the cumulative indexes, and since the two databases overlap heavily, one must decide whether duplicate searching is worthwhile. Several papers comparing CA Condensates with other databases have been published, and CA Condensates has generally appeared to be the inferior database. Buckley cited an example in which the Chemical Abstracts index yielded better documents (on the preparation of Terramycin) than CA Condensates, and the University of Georgia Search Center found CA Condensates less capable than the CA Integrated Subject File. Different forms of CA Condensates have also been compared: Michaels published a comparison of online searches of CA Condensates with manual searches of the weekly Chemical Abstracts issue indexes, and Prewitt compared two commercial online implementations of CA Condensates. The Amoco Research Center also compared search results from CA Condensates and the Chemical Abstracts indexes and found examples of the advantages of each, as well as cases where the two were effectively equivalent. In March 1975, at least four years of CA Condensates and indexes (Vols. 72-79, 1970-1973) were compared; author and general-subject searches were compared using Vol. 80 (Jan-June 1974). CA Condensates is usually inconvenient for searching specific compounds, and Buckley's case is the representative example. However, other types of search (corporate authors, patent assignees, personal authors, general and specific compounds, and reaction types) illustrate the advantages of CA Condensates for practical searching. In the examples that follow, CHEMCON and CHEM7071 are the online loads of CA Condensates.
