• Title/Summary/Keyword: Log File Analysis


Design of Personalized System using an Association Rule (연관규칙을 이용한 개인화 시스템 설계)

  • Yun, Jong-Chan;Youn, Sung-Dae
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.6 / pp.1089-1098 / 2007
  • User requirements on the Web are now diverse, and each web user wants to retrieve the data or goods they are looking for more conveniently and quickly. Because web users differ in their search criteria and dispositions, they are forced into unnecessary repeated operations to use a site as the web designer implemented it. In this paper, we propose a system that analyzes user patterns on the Web using log file analysis and delivers the information of web sites to users more effectively. In the proposed system, customer log data are analyzed with EC-Miner, a data mining tool, and the system aims to offer a layout matched to each user's personalization by assigning a weight to each navigation path.
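
The rule-mining step described here can be pictured with a small, self-contained sketch. This is not the paper's EC-Miner workflow; the sessions, thresholds, and page names below are illustrative assumptions, and only pairwise rules are derived.

```python
# Minimal sketch of mining pairwise association rules from sessionized
# web-log transactions. Names and thresholds are illustrative; the paper
# itself uses the EC-Miner tool rather than hand-rolled code.
from itertools import combinations
from collections import Counter

sessions = [            # each transaction = set of pages one visitor viewed
    {"/home", "/cat/shoes", "/item/42"},
    {"/home", "/cat/shoes", "/item/77"},
    {"/home", "/event"},
]

MIN_SUPPORT, MIN_CONFIDENCE = 0.5, 0.6
n = len(sessions)
page_count = Counter(p for s in sessions for p in s)
pair_count = Counter(pair for s in sessions
                     for pair in combinations(sorted(s), 2))

for (a, b), cnt in pair_count.items():
    support = cnt / n
    if support < MIN_SUPPORT:
        continue
    for ante, cons in ((a, b), (b, a)):          # rule ante -> cons
        confidence = cnt / page_count[ante]
        if confidence >= MIN_CONFIDENCE:
            print(f"{ante} -> {cons}  support={support:.2f} "
                  f"confidence={confidence:.2f}")
```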

The Threat Analysis and Security Guide for Private Information in Web Log (웹 로그 데이터에 대한 개인정보 위협분석 및 보안 가이드)

  • Ryeo, Sung-Koo;Shim, Mi-Na;Lee, Sang-Jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.19 no.6 / pp.135-144 / 2009
  • This paper discusses the serious security risks of web logs that contain private information and suggests solutions for protecting them. Privacy is now core information for producing added value in the information society, and its scope and types are expanding in importance as the information society grows. In South Korea, the web log is a file containing personal information as defined by law, yet it is not properly protected even though it holds private information; it is simply treated as a residual product of web services, and malicious actors can obtain private information from it. The problem stems from unclassified data and improper development of web applications. This paper suggests technical solutions that control data in the development phase, minimize the private information stored in web logs, and apply these controls in the operating environment. This is an efficient way both to protect private information and to comply with the law.
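
One of the suggested controls, minimizing the private information that reaches the log at development time, can be illustrated with a short sketch. The parameter names below are hypothetical; the paper does not prescribe this exact filter.

```python
# Illustrative sketch of the "minimize what reaches the log" idea: scrub
# sensitive query parameters from a request line before it is written to
# the access log. Parameter names are hypothetical examples.
import re

SENSITIVE = ("ssn", "phone", "email", "card")
PATTERN = re.compile(r"\b(" + "|".join(SENSITIVE) + r")=([^&\s]+)")

def scrub(log_line: str) -> str:
    """Replace the value of each sensitive parameter with ***."""
    return PATTERN.sub(lambda m: f"{m.group(1)}=***", log_line)

line = 'GET /join?name=kim&ssn=900101-1234567&plan=basic HTTP/1.1'
print(scrub(line))   # GET /join?name=kim&ssn=***&plan=basic HTTP/1.1
```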

A Digital Secret File Leakage Prevention System via Hadoop-based User Behavior Analysis (하둡 기반의 사용자 행위 분석을 통한 기밀파일 유출 방지 시스템)

  • Yoo, Hye-Rim;Shin, Gyu-Jin;Yang, Dong-Min;Lee, Bong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.11 / pp.1544-1553 / 2018
  • Recently, leakage of internal corporate information has been increasing sharply in spite of industry security policies, so companies must prepare measures to prevent information leakage. Most leaks result from insiders, not from external attacks. In this paper, a real-time internal information leakage prevention system covering both storage devices and networks is implemented to protect against confidential file leakage. In addition, a Hadoop-based user behavior analysis and statistics system is designed and implemented for storing and analyzing corporate log data. The proposed system stores a large volume of data in HDFS and improves data processing capability using RHive, helping the administrator recognize and respond to attempted leaks of confidential files. The implemented audit system should contribute to reducing the damage caused by leakage of confidential files, whether via portable storage media or over the network.
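
The kind of per-user aggregation the paper runs with RHive over HDFS can be pictured with a toy stand-in. The event layout, channel names, and threshold below are assumptions, not the paper's schema.

```python
# A toy stand-in for the aggregation the paper delegates to RHive on
# HDFS: count, per user, how often classified files left via USB or the
# network, and flag users above a threshold. Field names are assumptions.
from collections import Counter

events = [  # (user, action, channel, classified?)
    ("alice", "copy", "usb",     True),
    ("bob",   "send", "network", True),
    ("alice", "copy", "usb",     True),
    ("carol", "open", "local",   False),
]

THRESHOLD = 2
leaks = Counter(user for user, _, channel, classified in events
                if classified and channel in ("usb", "network"))

for user, cnt in leaks.items():
    if cnt >= THRESHOLD:
        print(f"ALERT: {user} moved classified files {cnt} times")
```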

Simplified Forensic Analysis Using List of Deleted Files in IoT Environment (사물인터넷 환경에서 삭제된 파일의 목록을 이용한 포렌식 분석 간편화)

  • Lim, Jeong-Hyeon;Lee, Keun-Ho
    • Journal of Internet of Things and Convergence / v.5 no.1 / pp.35-39 / 2019
  • With the rapid development of the information society, the use of digital devices has increased dramatically, and so has the importance of technologies for analyzing them. Digital evidence remains in many places such as Prefetch, Recent, the Registry, and the Event Log even after the user has deleted a file, but a forensic analyst still cannot easily determine at the outset which files the user actually worked with. In this paper, we therefore propose a method in which a RemoveList folder exists so that the analyst can identify deleted files first: information about each deleted file is saved into the RemoveList automatically and protected with AES. This can be expected to ease the analyst's difficulty in initially characterizing the user's PC.
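
A minimal sketch of the RemoveList idea follows, assuming a record of path and deletion time. The paper specifies AES; here the AES-based Fernet scheme from the cryptography package stands in, and the record layout is an assumption.

```python
# Minimal sketch of recording a deleted file's metadata into a
# "RemoveList" store, encrypted with an AES-based scheme. Fernet
# (AES-128-CBC + HMAC) stands in for the paper's AES usage, and the
# record layout is an assumption.
import json, time
from cryptography.fernet import Fernet   # pip install cryptography

key = Fernet.generate_key()              # in practice: a protected key store
box = Fernet(key)

def record_deletion(path: str, removelist: list) -> None:
    entry = json.dumps({"path": path, "deleted_at": time.time()})
    removelist.append(box.encrypt(entry.encode()))

removelist: list = []
record_deletion(r"C:\Users\kim\secret.xlsx", removelist)
print(json.loads(box.decrypt(removelist[0])))   # analyst-side decryption
```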

A Log Analysis Study of an Online Catalog User Interface (온라인목록 사용자 인터페이스에 관한 연구 : 탐색실패요인을 중심으로)

  • 유재옥
    • Journal of the Korean Society for Information Management / v.17 no.2 / pp.139-153 / 2000
  • This article presents a transaction log analysis of the DISCOVER online catalog user interface at Duksung Women's University Library. The results show that the most preferred access point is the title field, at 59.2%, while the least used is the author field, at 11.6%. Keyword searching accounts for only about 16% of all access points used. The overall search failure rate is 13.9%, with the highest failure rate, 19.8%, in the subject field and the lowest, 10.9%, in the author field.
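
The reported statistics are straightforward to reproduce from transaction records. The sketch below assumes each record is a (search field, hit count) pair and counts a zero-hit search as a failure; both are assumptions about the DISCOVER log layout.

```python
# Sketch of the per-access-point statistics reported in the abstract:
# usage share and failure rate by search field, computed from transaction
# records of the form (field, hit_count). Record layout is assumed.
from collections import defaultdict

searches = [("title", 3), ("title", 0), ("subject", 0),
            ("author", 5), ("keyword", 2)]

totals, failures = defaultdict(int), defaultdict(int)
for field, hits in searches:
    totals[field] += 1
    if hits == 0:                      # a zero-hit search counts as a failure
        failures[field] += 1

n = len(searches)
for field in totals:
    print(f"{field:8s} use={totals[field]/n:5.1%} "
          f"failure={failures[field]/totals[field]:5.1%}")
```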


Trends of Web-based OPAC Search Behavior via Transaction Log Analysis (트랜잭션 로그 분석을 통한 웹기반 온라인목록의 검색행태 추이 분석)

  • Lee, Sung-Sook
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.23 no.2 / pp.209-233 / 2012
  • In this study, seven years of transaction log files were analyzed to characterize the overall information seeking behavior of Web-based OPAC users. Their behavior was examined from the perspectives of search strategy and search failure. For search strategy, the analysis covered search type, search options, Boolean operators, length of search text, number of words used, number of Web-based OPAC uses, and usage by time of day and day of the week. For search failure, the overall failure rate and the failure rates by search option and by Boolean operator were analyzed. The results are expected to be useful for future OPAC system and service improvement.
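
The longitudinal tallies (by year, by day of week) reduce to simple grouping over timestamped log entries, as in the sketch below; the timestamp layout is an assumption about the OPAC log format.

```python
# Sketch of the longitudinal tallies used in the study: searches grouped
# by year and by day of week from timestamped log entries. The line
# format "YYYY-MM-DD HH:MM <search type>" is an assumption.
from collections import Counter
from datetime import datetime

log_lines = ["2005-03-02 10:15 keyword", "2011-03-02 22:40 title"]

by_year, by_weekday = Counter(), Counter()
for line in log_lines:
    ts = datetime.strptime(line[:16], "%Y-%m-%d %H:%M")
    by_year[ts.year] += 1
    by_weekday[ts.strftime("%A")] += 1

print(by_year.most_common(), by_weekday.most_common())
```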

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, realizing flexible storage expansion for a massive amount of unstructured log data, together with the considerable number of functions needed to categorize and analyze the stored data, is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, a strict schema like that of a relational database makes it difficult to expand nodes by distributing the stored data across them when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented data model with a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through a MongoDB log data insert performance evaluation over various chunk sizes.
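
The MongoDB side of the pipeline can be sketched briefly: schema-free documents make heterogeneous log records easy to store, and a simple aggregation gives per-type counts. Host, database, and field names below are assumptions, not the paper's configuration.

```python
# Hedged sketch of the MongoDB module's role: schema-free insertion of
# heterogeneous log documents, then an aggregation counting entries per
# log type. Host, database, and field names are assumptions.
from pymongo import MongoClient   # pip install pymongo

client = MongoClient("mongodb://localhost:27017")
logs = client["bank"]["logs"]

# Documents need not share a schema -- convenient for unstructured logs.
logs.insert_many([
    {"ts": "2013-10-01T09:00", "type": "login", "user": "u1"},
    {"ts": "2013-10-01T09:05", "type": "error", "code": 500, "msg": "timeout"},
])

# Aggregate: number of log entries per type.
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row["_id"], row["n"])
```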

Fuzzy category based transaction analysis for web usage mining (웹 사용 마이닝을 위한 퍼지 카테고리 기반의 트랜잭션 분석 기법)

  • 이시헌;이지형
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.04a / pp.341-344 / 2004
  • Web usage mining is a research field that discovers meaningful information in web log files or other web usage data. The web log files commonly used in web usage mining are simple lists of the pages each user visited, so methods that rely on the log file alone are limited in their ability to reflect the content of the pages the user actually viewed. To improve on this, this paper presents a new fuzzy category based web usage mining technique that considers the content (items) contained in web pages rather than the pages themselves. It also presents a method for tracking how a user's interests change over time in order to characterize the user more accurately.
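
A minimal sketch of the fuzzy-category idea follows, with illustrative membership degrees: each page belongs to content categories with a degree in [0, 1], and a session's interest in a category is taken here as the fuzzy union (max) over its pages.

```python
# Sketch of fuzzy category based transaction analysis: pages carry
# membership degrees in content categories, and a session's category
# profile is the fuzzy union (max) over its pages. Values are illustrative.
page_membership = {               # page -> {category: degree}
    "/item/42": {"shoes": 0.9, "sale": 0.3},
    "/item/77": {"shoes": 0.6, "bags": 0.7},
    "/event":   {"sale":  1.0},
}

def session_profile(pages):
    profile = {}
    for p in pages:
        for cat, deg in page_membership.get(p, {}).items():
            profile[cat] = max(profile.get(cat, 0.0), deg)  # fuzzy union
    return profile

print(session_profile(["/item/42", "/event"]))
# {'shoes': 0.9, 'sale': 1.0}
```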


A Study on CDC Analysis Using Redo-Log File (로그 파일을 이용한 CDC 분석에 관한 연구)

  • Kim, Young-Whan;Im, Yeung-Woon;Kim, Sang-Hyong;Kim, Ryong;Choi, Hyun-Ho
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.692-695 / 2014
  • In today's environment of exploding data volumes, most systems use a database for storage, but managing the accumulated data frequently causes problems. Most systems therefore deploy commercial backup or replication systems distributed across several locations to keep the data safe. In fact, every database system leaves its own log record each time data is written to a record. These log records are eventually stored in archive form, but before that they pass through a stage in which the log is written in real time. Based on the Oracle database, which is widely used by many institutions and organizations, this paper examines the redo log (Redo-Log) file in which log records are written in real time, and shows the logging procedure and its potential applications.
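
Reading redo-log contents is normally done through Oracle's LogMiner interface, which the sketch below drives from Python. The connection details and log file path are placeholders and the session needs LogMiner privileges; this is an assumed setup, not the paper's.

```python
# Hedged sketch of reading change records via Oracle LogMiner, the
# standard interface to the redo-log contents the paper examines.
# Connection details and paths are placeholders.
import oracledb   # pip install oracledb

conn = oracledb.connect(user="miner", password="...", dsn="db:1521/orcl")
cur = conn.cursor()

# Register a redo log file and start a LogMiner session.
cur.execute("""BEGIN
  DBMS_LOGMNR.ADD_LOGFILE('/u01/oradata/redo01.log');
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;""")

# Each row describes one change: the operation and the SQL that redoes it.
cur.execute("""SELECT operation, table_name, sql_redo
               FROM v$logmnr_contents WHERE seg_owner = 'BANK'""")
for op, table, sql_redo in cur:
    print(op, table, sql_redo)
```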

MapReduce-Based Partitioner Big Data Analysis Scheme for Processing Rate of Log Analysis (로그 분석 처리율 향상을 위한 맵리듀스 기반 분할 빅데이터 분석 기법)

  • Lee, Hyeopgeon;Kim, Young-Woon;Park, Jiyong;Lee, Jin-Woo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.5 / pp.593-600 / 2018
  • Owing to the advancement of the Internet and smart devices, access to various media such as social media has become easy, and a large amount of big data is being produced. In particular, companies that provide various Internet services analyze big data with MapReduce-based techniques to investigate customer preferences and patterns and to strengthen security. With MapReduce, however, when big data is analyzed with the number of reducer objects generated in the reduce stage fixed at one, the processing rate of the analysis drops. In this paper, a MapReduce-based partitioned big data analysis scheme is therefore proposed to improve the log analysis processing rate. The proposed scheme separates the reducer partitioning stage from the analysis result combining stage, and improves the big data processing rate by generating reducer objects dynamically, reducing the bottleneck.
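
The scheme's core idea, spreading reduce work over several dynamically created reducer objects instead of funneling everything into one, can be mimicked in miniature. The sketch below is a toy hash-partitioned count, not Hadoop code; the partition count and log lines are illustrative.

```python
# Toy illustration of the bottleneck the paper addresses: map output is
# hash-partitioned across several reducer workers instead of a single
# reducer, and the partial results are combined afterwards.
from collections import Counter
from multiprocessing import Pool

NUM_REDUCERS = 4
lines = ["ERROR disk full", "INFO ok", "ERROR timeout", "INFO ok"]

def reduce_partition(pairs):                # one reducer object
    return Counter(k for k, _ in pairs)

# Map + partition: route each (key, 1) pair by hash of the key.
partitions = [[] for _ in range(NUM_REDUCERS)]
for line in lines:
    level = line.split()[0]
    partitions[hash(level) % NUM_REDUCERS].append((level, 1))

if __name__ == "__main__":
    with Pool(NUM_REDUCERS) as pool:        # reducers run in parallel
        parts = pool.map(reduce_partition, partitions)
    print(sum(parts, Counter()))            # combine stage
```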