• Title/Summary/Keyword: 로그관리 (log management)

A Study on Process Design for Applying the National R&D Projects of Governmental Department (NTIS 범부처 국가R&D과제 신청 프로세스 설계)

  • Han, Heejun;Kim, Yunjeong;Choi, Heeseok;Kim, Jaesoo
    • Proceedings of the Korea Information Processing Society Conference / 2014.11a / pp.587-590 / 2014
  • To manage national R&D programs, 17 ministries and agencies each designate representative project-management institutions, and 16 such institutions announce national R&D projects every year, execute budgets, and manage the outcomes of contracted projects. To launch a national R&D project, a program announcement is issued first, and most ministries and representative institutions do this through online systems. Each representative institution operates its own research management system to post R&D project announcements, and researchers log in to the corresponding system to apply. A researcher who wants to apply must therefore locate the desired announcement and access scattered research management systems to find the necessary information, which is inconvenient. This paper presents a way to provide pan-governmental national R&D project announcements in an integrated manner and to let researchers apply for projects efficiently without accessing each scattered research management system individually. We discuss a login method across heterogeneous systems, the project application process, and how to efficiently manage and provide the submitted application information, offering researchers who wish to carry out national R&D projects a more efficient way to apply.
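The cross-system login the abstract mentions is not specified in detail; as a purely illustrative sketch, a portal could issue a signed token that each institution's research management system verifies instead of requiring a separate login. All names (SHARED_KEY, issue_token, verify_token) and the HMAC scheme are assumptions, not the paper's design.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret distributed to participating research
# management systems; the paper's actual login scheme is not specified.
SHARED_KEY = b"portal-demo-secret"

def issue_token(researcher_id: str, ttl: int = 300) -> str:
    """Portal side: sign the researcher's identity and an expiry time."""
    payload = json.dumps({"sub": researcher_id, "exp": time.time() + ttl}).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str) -> str | None:
    """Institution side: accept the portal token instead of a separate login."""
    body, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(body.encode())
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                      # signature mismatch: reject
    claims = json.loads(payload)
    if time.time() > claims["exp"]:
        return None                      # token expired: reject
    return claims["sub"]                 # authenticated researcher id

token = issue_token("researcher-042")
print(verify_token(token))               # -> researcher-042
```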

Log Collection Method for Efficient Management of Systems using Heterogeneous Network Devices (이기종 네트워크 장치를 사용하는 시스템의 효율적인 관리를 위한 로그 수집 방법)

  • Jea-Ho Yang;Younggon Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.119-125 / 2023
  • As IT infrastructure operations have advanced, methods for managing systems centrally have become widely adopted. Recently, research has focused on improving system management using Syslog. However, utilizing log data collected through these methods presents challenges, as logs are extracted in various formats that require expert analysis. This paper proposes a system that uses edge computing to distribute the collection of Syslog data and preprocesses duplicate data before storing it in a central database. The system also constructs a data dictionary to classify and count data in real time, restricting the transmission of already-registered data to the central database. This approach maintains the predefined patterns in the data dictionary, controls duplicate data and temporal duplicates, and stores refined data in the central database, thereby securing fundamental data for big data analysis. The proposed algorithms and procedures are demonstrated through simulations and examples. Real Syslog data, including extracted examples, is used to verify that the necessary information is accurately extracted from the log data and that the classification and storage processes execute successfully. The system can serve as an efficient solution for collecting and managing log data in edge environments and offers potential benefits for technology diffusion.
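A minimal sketch of the edge-side preprocessing described above, under assumptions: incoming Syslog lines are normalized into patterns, a local data dictionary counts repeats in real time, and only first-seen (unregistered) patterns are forwarded to the central database. The normalization rules and names are illustrative, not the paper's algorithm.

```python
import re
from collections import Counter

# Hypothetical data dictionary: pattern -> running count at the edge node.
data_dictionary: Counter[str] = Counter()

def normalize(line: str) -> str:
    """Collapse variable fields (IPs, numbers) into placeholders so
    recurring messages map to one dictionary pattern."""
    line = re.sub(r"\d{1,3}(?:\.\d{1,3}){3}", "<IP>", line)
    line = re.sub(r"\d+", "<N>", line)
    return line

def ingest(line: str, forward) -> None:
    """Count every occurrence locally; forward a pattern to the central
    database only the first time it is seen (registered patterns are held)."""
    pattern = normalize(line)
    data_dictionary[pattern] += 1
    if data_dictionary[pattern] == 1:    # unregistered pattern: transmit once
        forward(pattern)

central_db = []
for raw in [
    "Jan  1 00:00:01 fw1 DROP src=10.0.0.5 dst=10.0.0.9",
    "Jan  1 00:00:02 fw1 DROP src=10.0.0.6 dst=10.0.0.9",
]:
    ingest(raw, central_db.append)
print(central_db)          # one refined pattern instead of two raw lines
print(data_dictionary)     # local counts preserved for later analysis
```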

Workload-Driven Adaptive Log Block Allocation for Efficient Flash Memory Management (효율적 플래시 메모리 관리를 위한 워크로드 기반의 적응적 로그 블록 할당 기법)

  • Koo, Duck-Hoi;Shin, Dong-Kun
    • Journal of KIISE: Computer Systems and Theory / v.37 no.2 / pp.90-102 / 2010
  • Flash memory has been widely used as an important storage device for consumer electronics. In flash memory-based storage systems, the FTL (Flash Translation Layer) handles the mapping between logical page addresses and physical page addresses. In particular, log buffer-based FTLs provide good performance with small mapping tables. In designing a log buffer-based FTL, one important factor is the mapping structure between data blocks and log blocks, called associativity. While previous works use a static associativity fixed at design time, we propose a new log block mapping scheme that adjusts associativity based on the run-time workload. By adapting the associativity to the workload, the proposed scheme improves I/O performance by about 5~16% over the static scheme.
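To make the idea of run-time associativity adjustment concrete, here is a toy sketch: track how sequential the recent write stream is, then periodically raise associativity for random-dominant workloads or lower it for sequential ones. The thresholds and the sequentiality metric are invented for illustration; the paper's actual policy is not reproduced here.

```python
# Toy model of workload-driven associativity adjustment in a log
# buffer-based FTL. Thresholds and metric are illustrative assumptions.

class AdaptiveLogBlockMapper:
    def __init__(self, min_assoc=1, max_assoc=8):
        self.assoc = min_assoc          # data blocks sharing one log block
        self.min_assoc, self.max_assoc = min_assoc, max_assoc
        self.last_lpn = None
        self.sequential = 0
        self.total = 0

    def record_write(self, lpn: int) -> None:
        """Track how sequential the recent write stream is."""
        if self.last_lpn is not None and lpn == self.last_lpn + 1:
            self.sequential += 1
        self.total += 1
        self.last_lpn = lpn

    def adjust(self) -> None:
        """Periodically re-tune associativity from the observed workload:
        random-dominant workloads benefit from higher associativity,
        sequential ones from lower (block-level) mapping."""
        if self.total == 0:
            return
        seq_ratio = self.sequential / self.total
        if seq_ratio < 0.3 and self.assoc < self.max_assoc:
            self.assoc *= 2             # mostly random: share log blocks more
        elif seq_ratio > 0.7 and self.assoc > self.min_assoc:
            self.assoc //= 2            # mostly sequential: dedicate log blocks
        self.sequential = self.total = 0

mapper = AdaptiveLogBlockMapper()
for lpn in [10, 57, 3, 81, 22, 64]:    # a random-looking write burst
    mapper.record_write(lpn)
mapper.adjust()
print(mapper.assoc)                     # associativity grew: 1 -> 2
```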

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, a separate log data processing system is needed to gather, store, categorize, and analyze the log data generated while processing a client's business. However, in existing computing environments it is difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive amounts of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the system offers automatic restore functions that keep it operating after recovering from a malfunction. Finally, by establishing a distributed database on the NoSQL-based MongoDB, the system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas make it difficult to expand nodes when rapidly growing data must be distributed across many nodes. NoSQL databases do not provide the complex computations that relational databases offer, but they can easily expand through node dispersion when the amount of data increases rapidly; they are non-relational databases structurally suited to unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the proposed system uses MongoDB, a representative document-oriented database with a schema-free structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when data grow rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over a bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analyses performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through MongoDB insert performance evaluations over various chunk sizes.
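A small sketch of the log collector module's routing role as described above, with stand-in sinks instead of real MySQL/MongoDB connections: records needing real-time analysis go to the relational path, everything else to the schema-free document path. The type names and fields are assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Sink:
    """Stand-in for a storage backend (real system: MySQL or MongoDB)."""
    name: str
    records: list = field(default_factory=list)
    def insert(self, record: dict) -> None:
        self.records.append(record)

mysql_sink = Sink("MySQL (real-time analysis)")
mongodb_sink = Sink("MongoDB (aggregated, Hadoop-analyzed)")

REALTIME_TYPES = {"auth_failure", "transaction_error"}   # assumed types

def collect(record: dict) -> None:
    """Route a classified log record to the appropriate store."""
    if record.get("type") in REALTIME_TYPES:
        mysql_sink.insert(record)        # needs immediate graphing
    else:
        mongodb_sink.insert(record)      # stored schema-free for batch analysis

for rec in [
    {"type": "auth_failure", "client": "c1", "msg": "bad PIN"},
    {"type": "page_view", "client": "c2", "msg": "loan page"},
]:
    collect(rec)

print(len(mysql_sink.records), len(mongodb_sink.records))   # -> 1 1
```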

PC Audit and Forensics using Active Directory (Active Directory를 이용한 PC 감사 및 포렌식)

  • Lee, Yu-Bin;Lee, Seong-Won;Cho, Taenam
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.212-215 / 2019
  • Active Directory (AD) provides LDAP directory services and Kerberos-based computer authentication in Windows environments. This paper presents two methods that use AD's auditing features to manage and audit the logs of multiple computers from a single server. Such logs could then be used for digital forensics on a specific computer.
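As an illustration of how such centrally collected AD audit logs might feed a forensic question, the sketch below groups successful logons by machine from a hypothetical CSV export of the Security event log. The column names are assumptions; 4624 is the standard Windows event ID for a successful logon.

```python
import csv
from collections import defaultdict

def logons_by_machine(csv_path: str) -> dict[str, list[str]]:
    """Group successful-logon timestamps by originating computer.
    Assumes a CSV export with EventID, Computer, TimeCreated columns."""
    logons: dict[str, list[str]] = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("EventID") == "4624":     # successful logon
                logons[row["Computer"]].append(row["TimeCreated"])
    return logons

# Usage (hypothetical export): audit when and where accounts logged on.
# for machine, times in logons_by_machine("security_log.csv").items():
#     print(machine, len(times), "logons")
```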

Design and Evaluation of a Personalized Search Service Model Based on Web Portal User Activities (웹 포털 이용자 로그 데이터에 기반한 개인화 검색 서비스 모형의 설계 및 평가)

  • Lee, So-Young;Chung, Young-Mee
    • Journal of the Korean Society for Information Management / v.23 no.4 s.62 / pp.179-196 / 2006
  • This study proposes an expanded model of personalized search service based on community activities on a Korean Web portal. The model is composed of defining users' subject categories, providing personalized search results, and recommending additional subject categories and queries. Several experiments were performed to verify the feasibility and effectiveness of the proposed model. It was found that users' activities on community services provide valuable data for identifying their interests, and that the personalized search service increases users' satisfaction.
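A minimal sketch of the model's first two components under simple assumptions: infer a user's subject categories from logged community activity, then boost matching search results. The category names and the re-ranking rule are illustrative, not the paper's actual weighting.

```python
from collections import Counter

def infer_categories(activity_log: list[str], top_n: int = 2) -> list[str]:
    """Most frequent subject categories in the user's community activity."""
    return [c for c, _ in Counter(activity_log).most_common(top_n)]

def personalize(results: list[dict], profile: list[str]) -> list[dict]:
    """Re-rank: results in a profiled category float upward."""
    return sorted(results,
                  key=lambda r: (r["category"] not in profile, r["rank"]))

log = ["travel", "cooking", "travel", "finance", "travel"]
profile = infer_categories(log)                 # ['travel', 'cooking']
results = [{"rank": 1, "category": "finance"},
           {"rank": 2, "category": "travel"},
           {"rank": 3, "category": "cooking"}]
print([r["category"] for r in personalize(results, profile)])
# -> ['travel', 'cooking', 'finance']
```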

Information Seeking Behavior of Shopping Site Users: A Log Analysis of Popshoes, a Korean Shopping Search Engine (이용자들의 쇼핑 검색 행태 분석: 팝슈즈 로그 분석을 중심으로)

  • Park, Soyeon;Cho, Kihun;Choi, Kirin
    • Journal of the Korean Society for Information Management / v.32 no.4 / pp.289-305 / 2015
  • This study aims to investigate the information seeking behavior of Popshoes users. Transaction logs of Popshoes, a major Korean shopping search engine, were analyzed. The logs were collected over a three-month period, from January 1 to March 31, 2015. The results show that Popshoes users behave in a simple and passive way. Across all sessions, users browsed directories more often than they typed and submitted queries. However, queries played a more crucial role than directory browsing in important decisions such as search result clicks and product purchases. The results of this study can be applied to the effective development of shopping search engines.
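The kind of session-level analysis reported above can be sketched as follows: split a transaction log into sessions by an inactivity gap, then compare how many sessions contain directory browsing versus query submission. The log fields and the 30-minute gap are assumptions for illustration.

```python
SESSION_GAP = 30 * 60  # seconds of inactivity that ends a session (assumed)

def sessionize(events):
    """events: (user, timestamp, action) tuples sorted by user and time."""
    sessions, current, last = [], [], None
    for user, ts, action in events:
        if current and (user != last[0] or ts - last[1] > SESSION_GAP):
            sessions.append(current)     # gap or new user: close the session
            current = []
        current.append(action)
        last = (user, ts)
    if current:
        sessions.append(current)
    return sessions

events = [("u1", 0, "browse_directory"), ("u1", 60, "purchase"),
          ("u1", 10_000, "query"), ("u1", 10_050, "click_result"),
          ("u2", 5, "query"), ("u2", 40, "purchase")]
sessions = sessionize(events)
with_query = sum("query" in s for s in sessions)
with_browse = sum("browse_directory" in s for s in sessions)
print(len(sessions), with_query, with_browse)   # -> 3 2 1
```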

Analyzing Patterns in News Reporters' Information Seeking Behavior on the Web (기자직의 웹 정보탐색행위 패턴 분석)

  • Kwon, Hye-Jin;Jeong, Dong-Youl
    • Journal of the Korean Society for Information Management / v.27 no.4 / pp.109-130 / 2010
  • The purpose of this study is to identify the patterns in news reporters' information seeking behavior by observing their web activities. For this purpose, transaction logs collected from 23 news reporters were analyzed. Web tracking software was installed on their PCs, and a total of 39,860 web logs were collected over two weeks. Session start and end patterns, step-by-step transition patterns, and sequence rule models were analyzed, and the reporters' pattern of Internet use was compared with that of the general public. The analysis derived a web information seeking behavior model consisting of four types of behavior: fact-checking browsing, fact-checking search, investigative browsing, and investigative search.
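As a sketch of the step-by-step transition analysis mentioned above, the snippet below counts how often one page category follows another across sessions and reports the most likely next step; the page categories are invented for illustration.

```python
from collections import Counter, defaultdict

def transition_counts(sessions: list[list[str]]):
    """Count category-to-category transitions across all sessions."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for session in sessions:
        for a, b in zip(session, session[1:]):
            counts[a][b] += 1
    return counts

sessions = [["portal", "news", "search", "news"],
            ["portal", "search", "news"],
            ["portal", "news", "news"]]
for src, dests in transition_counts(sessions).items():
    total = sum(dests.values())
    nxt, n = dests.most_common(1)[0]     # dominant next step from src
    print(f"{src} -> {nxt} ({n}/{total})")
```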

An Efficient Scheme of Performing Pending Actions for the Removal of Database Files (데이터베이스 파일의 삭제를 위한 미처리 연산의 효율적 수행 기법)

  • Park, Jun-Hyun;Park, Young-Chul
    • Journal of KIISE: Databases / v.28 no.3 / pp.494-511 / 2001
  • In environments where database management systems directly manage the disk space for storing databases, this paper proposes a correct and efficient scheme for performing pending actions for the removal of database files. Upon recovery, the recovery process must identify the unperformed pending actions of not-yet-terminated transactions and then perform those actions completely. The basic idea of this paper is to have the recovery process identify those actions by analyzing the log records in the log file. This scheme, an extension of the transaction execution, fuzzy checkpointing, and recovery of ARIES, uses the following methods. First, so that not-yet-terminated transactions can be identified during recovery, transactions perform pending actions after writing 'pa_start' log records, which signify both the commit of the transaction and the start of executing its pending actions, and then write 'end' log records. Second, to restore the pending-actions-lists of not-yet-terminated transactions during recovery, each transaction records its pending-actions-list in its 'pa_start' log record, and the checkpoint process records the pending-actions-lists of transactions judged to have committed in the 'end_chkpt' log record. Third, to identify the next pending action to perform during recovery, whenever a page is updated during the execution of pending actions, the transaction records the information identifying the next pending action to perform in the log record that holds the redo information for that page.
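A minimal sketch of the recovery-time log analysis this scheme relies on: a single scan identifies transactions that wrote a 'pa_start' record but no matching 'end' record and recovers what remains of their pending-actions-lists. The record layout and the per-action 'pa_done' record are simplifying assumptions for illustration, not the paper's exact format.

```python
def find_unfinished_pending_actions(log: list[dict]) -> dict[str, list[str]]:
    """Scan the log once; return txn -> pending actions still to perform."""
    pending: dict[str, list[str]] = {}
    for rec in log:
        if rec["type"] == "pa_start":
            # pa_start carries the transaction's full pending-actions-list
            pending[rec["txn"]] = list(rec["actions"])
        elif rec["type"] == "pa_done":          # assumed per-action record
            pending[rec["txn"]].pop(0)          # that action completed
        elif rec["type"] == "end":
            pending.pop(rec["txn"], None)       # terminated: nothing to redo
    return pending

log = [
    {"type": "pa_start", "txn": "T1", "actions": ["free_extent_A", "free_extent_B"]},
    {"type": "pa_done",  "txn": "T1"},
    {"type": "pa_start", "txn": "T2", "actions": ["free_extent_C"]},
    {"type": "end",      "txn": "T2"},
]   # crash here: T1 never wrote 'end'
print(find_unfinished_pending_actions(log))   # -> {'T1': ['free_extent_B']}
```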
