• Title/Summary/Keyword: Log Management (로그관리)


A Defence Algorithm against Replay Attacks in Web Applications (웹 어플리케이션에서의 재전송 공격 방어 알고리즘)

  • Won, Jong Sun; Shon, Jin Gon
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.735-738 / 2011
  • Most web applications that use membership registration information perform login authentication with cookie- and session-based authentication techniques. However, web applications that rely on cookies for login authentication are vulnerable to replay attacks. A replay attack is an attack in which a hacker sniffs a user's cookie information at login and then forcibly retransmits it, for example from the cookie manager of the Opera web browser, to impersonate the user. This paper proposes an algorithm that can defend against replay attacks by using a cookie together with a session authentication key. In addition, the sniffing problem caused by untrusted communication between client and server is mitigated by an encrypted session authentication key that is stored only on the server.
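The defence described above (a cookie combined with a session authentication key stored only on the server) can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation; `SERVER_SECRET`, `login`, and `verify_and_rotate` are hypothetical names. Rotating the key after each successful check makes a sniffed token useless on replay.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side session store: session_id -> current auth key.
# The key itself never travels in the cookie; only an HMAC over it does.
SESSIONS = {}
SERVER_SECRET = b"server-only-secret"  # illustrative placeholder


def login(session_id: str) -> str:
    """Create a fresh per-session auth key and return the cookie token."""
    auth_key = secrets.token_hex(16)
    SESSIONS[session_id] = auth_key
    return hmac.new(SERVER_SECRET, (session_id + auth_key).encode(),
                    hashlib.sha256).hexdigest()


def verify_and_rotate(session_id: str, token: str) -> bool:
    """Accept the token once, then rotate the key so a replay fails."""
    auth_key = SESSIONS.get(session_id)
    if auth_key is None:
        return False
    expected = hmac.new(SERVER_SECRET, (session_id + auth_key).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token):
        return False
    # Rotate: the same sniffed token can never be accepted again.
    SESSIONS[session_id] = secrets.token_hex(16)
    return True
```

A sniffed token passes verification at most once; the forced retransmission described in the abstract then fails against the rotated server-side key.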

Personalized Private Information Security Method on Smartphone (스마트폰 환경에서 개인정보 보안 기법)

  • Jeong, MinKyoung; Choi, Okkyung; Yeh, HongJin
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.751-754 / 2011
  • Life-log services, which store posts, photos, and videos created by individuals along with time and place, have been increasing recently. Although such information records an individual's daily life and is therefore sensitive private data, it is weakly managed. In the smartphone environment, SQLite is used to store data, and SEE and SQLCipher exist as ways to encrypt it, but both encrypt the entire database, so even unimportant data is encrypted and stored. As a way to protect personal information, this paper uses the SEED cipher in SQLite to encrypt key personal information at the column level. That is, life-log data is classified according to the importance of personal privacy, and only the important data among the classified data is selectively encrypted and decrypted. This reduces the computation time spent on encryption and decryption compared to the existing whole-database encryption approach and strengthens the personal information security of life-log data.
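Column-level selective encryption of the kind described can be sketched as follows. Since the SEED cipher is not available in the Python standard library, a SHA-256-based keystream stands in for it here; the table schema and the key are illustrative, not from the paper. Only the sensitive column is encrypted, while the unimportant column stays in plaintext.

```python
import hashlib
import sqlite3

KEY = b"demo-key"  # illustrative; the paper uses the SEED block cipher


def _keystream(key: bytes, n: int) -> bytes:
    # Stand-in keystream (NOT SEED): expand the key with SHA-256 in counter mode.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]


def enc(text: str) -> bytes:
    """Encrypt one column value."""
    data = text.encode()
    return bytes(a ^ b for a, b in zip(data, _keystream(KEY, len(data))))


def dec(blob: bytes) -> str:
    """Decrypt one column value."""
    return bytes(a ^ b for a, b in zip(blob, _keystream(KEY, len(blob)))).decode()


# Only the sensitive column ("location") is encrypted; "note" stays plaintext.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE lifelog (note TEXT, location BLOB)")
db.execute("INSERT INTO lifelog VALUES (?, ?)", ("lunch", enc("Seoul, Gangnam")))
row = db.execute("SELECT note, location FROM lifelog").fetchone()
```

Because only flagged columns pass through `enc`/`dec`, the per-row cipher work scales with the amount of sensitive data rather than with the whole database, which is the saving the abstract describes.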

Anomaly Detection Technique of Log Data Using Hadoop Ecosystem (하둡 에코시스템을 활용한 로그 데이터의 이상 탐지 기법)

  • Son, Siwoon; Gil, Myeong-Seon; Moon, Yang-Sae
    • KIISE Transactions on Computing Practices / v.23 no.2 / pp.128-133 / 2017
  • In recent years, the number of systems for analyzing large volumes of data has been increasing. Hadoop, a representative big data system, stores and processes large data in a distributed environment of multiple servers, where system-resource management is very important. The authors attempted to detect anomalies from rapid changes in the log data collected from multiple servers using simple but efficient anomaly-detection techniques. Accordingly, an Apache Hive storage architecture was designed to store the log data collected from the multiple servers in the Hadoop ecosystem. Also, three anomaly-detection techniques were designed based on the moving-average and 3-sigma concepts. It was finally confirmed that all three techniques detected the abnormal intervals correctly, and that the weighted anomaly-detection technique is more precise than the basic techniques. These results show that log-data anomalies can be detected effectively with simple techniques in the Hadoop ecosystem.
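A minimal sketch of moving-average plus 3-sigma anomaly detection, assuming a univariate series of per-interval log counts; the function name and window size are illustrative, not the paper's.

```python
from statistics import mean, stdev


def anomalies_3sigma(series, window=5):
    """Flag indices whose value falls outside mean +/- 3*sigma
    of the preceding moving window."""
    flagged = []
    for i in range(window, len(series)):
        w = series[i - window:i]          # trailing window of recent values
        m, s = mean(w), stdev(w)
        if s > 0 and abs(series[i] - m) > 3 * s:
            flagged.append(i)
    return flagged
```

A spike such as a sudden burst of log records stands far outside three standard deviations of the recent window and is flagged; the paper's weighted variant would additionally weight recent values more heavily.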

A Tile-Image Merging Algorithm of Tiled-Display Recorder using Time-stamp (타임 스탬프를 이용한 타일드 디스플레이 기록기의 타일 영상 병합 알고리즘)

  • Choe, Gi-Seok; Nang, Jong-Ho
    • Journal of KIISE: Computer Systems and Theory / v.36 no.5 / pp.327-334 / 2009
  • The tiled-display system provides a high-resolution display that can be used in different applications in a co-working area. Systems used in the co-working field usually save user logs, and this log information not only makes maintenance of the tiled-display system easier, but can also be used to check the progress of the co-working. There are three main steps in the proposed tiled-display log recorder. The first step is to capture screen shots of the tiles and send them for merging. The second step is to merge the captured tile images to form a single screen shot of the tiled display. The final step is to encode the merged tile images into a compressed video stream. This video stream can be stored as a log of the co-working or streamed to remote users. Since there can be differences in the capturing times of the tile images, the quality of the merged tiled display can be degraded. This paper proposes a time-stamp-based metric to evaluate the quality of the video stream, and a merging algorithm that can improve the quality of the video stream with respect to the proposed quality metric.
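The time-stamp-based selection and quality metric can be illustrated roughly as follows; the frame representation and function names are assumptions, not the paper's code. Each tile contributes the buffered frame closest to a common target time, and the merged frame's quality is judged by the spread of the chosen time stamps (smaller is better).

```python
def pick_frames(tile_buffers, target_ts):
    """For each tile, choose the buffered frame closest to target_ts."""
    chosen = {}
    for tile, frames in tile_buffers.items():
        chosen[tile] = min(frames, key=lambda f: abs(f["ts"] - target_ts))
    return chosen


def merge_skew(chosen):
    """Quality metric: time-stamp spread of the merged frame.
    Zero means all tiles were captured at the same instant."""
    ts = [f["ts"] for f in chosen.values()]
    return max(ts) - min(ts)
```

Minimizing the skew across tiles is one way to realize the "time stamp-based metric" the abstract mentions: a merge built from frames with a small spread shows no visible tearing between tiles.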

Metadata Log Management for Full Stripe Parity in Flash Storage Systems (플래시 저장 시스템의 Full Stripe Parity를 위한 메타데이터 로그 관리 방법)

  • Lim, Seung-Ho
    • The Journal of Korean Institute of Information Technology / v.17 no.11 / pp.17-26 / 2019
  • RAID-5 technology is one choice for flash storage devices to enhance reliability. However, RAID-5 has inherent parity-update overhead; in particular, the parity overhead for partial stripe writes is one of the crucial issues for flash-based RAID-5 technologies. In this paper, we design an efficient parity-log architecture for RAID-5 to eliminate runtime partial-parity overhead. During runtime, partial parity is retained in buffer memory until a full stripe write is completed, and the parity is written with the full stripe write. In addition, the parity log is maintained in memory until the whole stripe group has been used for data writes. With this parity log, partial parity can be recovered after a power loss. In the experiments, the parity-log method eliminates the overhead of partial parity writes with only a small number of parity-log writes. Hence it can reduce write amplification while providing the same reliability.
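The buffered partial-parity idea can be sketched with XOR parity as follows; `StripeBuffer` and its interface are illustrative, not the paper's design. Partial parity is accumulated in memory as data blocks arrive, and the parity block is written out only when the stripe is full.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks (RAID-5 parity primitive)."""
    return bytes(x ^ y for x, y in zip(a, b))


class StripeBuffer:
    """Buffer partial parity in memory; emit parity only on a full stripe."""

    def __init__(self, stripe_width, block_size):
        self.width = stripe_width
        self.partial = bytes(block_size)  # running XOR of blocks seen so far
        self.filled = 0
        self.parity_writes = 0

    def write_block(self, data: bytes):
        self.partial = xor_blocks(self.partial, data)  # update partial parity
        self.filled += 1
        if self.filled == self.width:
            parity = self.partial          # flush with the full-stripe write
            self.parity_writes += 1
            self.partial = bytes(len(parity))
            self.filled = 0
            return parity
        return None  # partial stripe: parity stays in the in-memory log
```

Because `parity_writes` increments only once per full stripe instead of once per partial write, the device avoids the repeated read-modify-write parity updates that inflate write amplification.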

Development of Recommendation Agents through Web Log Analysis (웹 로그 분석을 이용한 추천 에이전트의 개발)

  • 김성학; 이창훈
    • Journal of the Korea Computer Industry Society / v.4 no.10 / pp.621-630 / 2003
  • Web logs are the information recorded by a web server when users access a web site, and due to the rapid rise of internet usage, the worth of their practical use has become increasingly important. Analyzing such logs can determine the patterns representing users' navigational behavior in a web site and help restructure a web site to create a more effective organizational presence. For these applications, the key methods generally used in many studies are association rules and sequential patterns based on the Apriori algorithm, which is widely used to extract correlations among patterns. However, Apriori is inherently inefficient in computing cost when applied to large databases. In this paper, we develop a new algorithm for mining interesting patterns that is faster than the Apriori algorithm, and a recommendation agent that can provide a system manager with valuable information about pages that are accessed sequentially by many users.
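As a rough illustration of the kind of support counting involved (not the paper's faster algorithm), the following sketch counts page pairs that co-occur in at least `min_support` user sessions extracted from a web log.

```python
from collections import Counter
from itertools import combinations


def frequent_pairs(sessions, min_support):
    """Count page pairs that co-occur in at least min_support sessions.
    Each session is the list of pages one user visited."""
    counts = Counter()
    for pages in sessions:
        # De-duplicate within a session, sort for a canonical pair order.
        for pair in combinations(sorted(set(pages)), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}
```

Apriori avoids enumerating all pairs by pruning candidates level by level; the paper's contribution is a mining algorithm that is faster still on large log databases.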


XML-based Modeling for Semantic Retrieval of Syslog Data (Syslog 데이터의 의미론적 검색을 위한 XML 기반의 모델링)

  • Lee Seok-Joon; Shin Dong-Cheon; Park Sei-Kwon
    • The KIPS Transactions: Part D / v.13D no.2 s.105 / pp.147-156 / 2006
  • Event logging plays an increasingly important role in system and network management, and syslog is a de facto standard for logging system events. However, due to the semi-structured features of Common Log Format data, most studies on log analysis focus on frequent patterns. The eXtensible Markup Language can provide a good representation scheme for the structure and search of formatted data found in syslog messages. However, previous XML-formatted schemes and applications for system logging are not suitable for semantic approaches such as ranking-based search or similarity measurement over log data. In this paper, based on ranked keyword-search techniques over XML documents, we propose an XML tree structure through a new data-modeling approach for syslog data. Finally, we show the suitability of the proposed structure for semantic retrieval.
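A minimal sketch of mapping a syslog line into an XML tree, assuming a simplified message layout; the element names and the regular expression are illustrative, not the paper's model. Structuring the fields this way is what makes keyword ranking over specific elements (host, process, message) possible.

```python
import re
import xml.etree.ElementTree as ET

# Simplified syslog line: "<PRI>TIMESTAMP HOST TAG: MSG" (illustrative pattern).
LINE = "<34>Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick"

m = re.match(r"<(\d+)>(\w{3} \d+ [\d:]+) (\S+) (\S+): (.*)", LINE)
pri, ts, host, tag, msg = m.groups()

# Build one <event> element per log line, one child per parsed field.
event = ET.Element("event")
ET.SubElement(event, "priority").text = pri
ET.SubElement(event, "timestamp").text = ts
ET.SubElement(event, "host").text = host
ET.SubElement(event, "process").text = tag
ET.SubElement(event, "message").text = msg

xml_text = ET.tostring(event, encoding="unicode")
```

With events in this form, a ranked keyword search can score matches differently depending on which element they occur in, rather than treating the log line as one flat string.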

Analysis of Korean Patent & Trademark Retrieval Query Log to Improve Retrieval and Query Reformulation Efficiency (질의로그 데이터에 기반한 특허 및 상표검색에 관한 연구)

  • Lee, Jee-Yeon; Paik, Woo-Jin
    • Journal of the Korean Society for Information Management / v.23 no.2 / pp.61-79 / 2006
  • To come up with recommendations to improve patent & trademark retrieval efficiency, 100,016 patent & trademark search requests by 17,559 unique users over a period of 193 days were analyzed. By analyzing 2,202 multi-query sessions, in which one user issued two or more queries consecutively, we discovered a number of clues for improving retrieval efficiency. The session analysis also led to suggestions for new system features to help users reformulate queries. The patent & trademark retrieval users were found to be similar to typical web users in certain aspects, especially in issuing short queries. However, we also found that the patent & trademark retrieval users used Boolean operators more than typical web search users. By analyzing the multi-query sessions, we found that the users had five intentions in reformulating queries, namely paraphrasing, specialization, generalization, alternation, and interruption, which are also exhibited by web search engine users.
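The five reformulation intentions can be approximated with a simple token-overlap heuristic, sketched below; the rules are illustrative and much cruder than the paper's session analysis (paraphrasing is folded into alternation here, since detecting it requires synonym knowledge).

```python
def classify_reformulation(prev: str, curr: str) -> str:
    """Rough token-overlap heuristic for query-reformulation intent."""
    p, c = set(prev.lower().split()), set(curr.lower().split())
    if p == c:
        return "repeat"
    if p < c:
        return "specialization"   # terms added: narrowing the query
    if c < p:
        return "generalization"   # terms removed: broadening the query
    if p & c:
        return "alternation"      # partial overlap: some terms swapped
    return "interruption"         # no overlap: a new information need
```

Applied over consecutive query pairs in a session log, such a classifier gives a first-pass distribution of reformulation behavior before any manual review.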

System and Utilization for E-Catalog Classifier (전자 카탈로그 자동분류기 시스템과 그 활용)

  • Lee, Ig-Hoon; Chun, Jong-Hoon
    • Journal of KIISE: Computing Practices and Letters / v.14 no.9 / pp.876-883 / 2008
  • Clearly defined e-catalog (or product) information is a key foundation for an e-commerce system. Classification (or categorization) is core information for building clear e-catalogs and can play an important role in the quality of e-commerce systems that use e-catalogs. However, with the wide use of online business transactions, the volume of e-catalog information that needs to be managed in a system has become drastically large, and the classification of such data has become highly complex. In this paper, we present an e-catalog classifier system, and report on our effort to improve the e-catalog management process and to standardize e-catalogs for enterprises by using an automated approach to e-catalog classification. We also introduce some of the issues that we experienced in the projects, so that our work may help those who undertake similar projects in the future.
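As a toy illustration of automated e-catalog classification (not the paper's classifier), a keyword-profile matcher might look like this; the categories and keywords are invented for the example.

```python
# Illustrative keyword profiles per product category (invented for the sketch).
PROFILES = {
    "office": {"paper", "toner", "stapler", "printer"},
    "network": {"router", "switch", "cable", "ethernet"},
}


def classify(description: str) -> str:
    """Assign the category whose keyword profile overlaps the description most."""
    tokens = set(description.lower().split())
    scores = {cat: len(tokens & kws) for cat, kws in PROFILES.items()}
    return max(scores, key=scores.get)
```

A production classifier would learn such profiles from labeled catalog data rather than hand-listing them, but the matching step has the same shape: score each category against the product text and pick the best.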

MITRE ATT&CK and Anomaly detection based abnormal attack detection technology research (MITRE ATT&CK 및 Anomaly Detection 기반 이상 공격징후 탐지기술 연구)

  • Hwang, Chan-Woong; Bae, Sung-Ho; Lee, Tae-Jin
    • Convergence Security Journal / v.21 no.3 / pp.13-23 / 2021
  • Attackers' techniques and tools are becoming intelligent and sophisticated. Existing anti-virus software cannot prevent every security incident, so security threats on the endpoint should also be considered. Recently, EDR security solutions to protect endpoints have emerged, but they focus on visibility; there is still a lack of detection and response capability. In this paper, we use real-world EDR event logs and combine knowledge-based MITRE ATT&CK mapping with autoencoder-based anomaly detection in order to detect anomalies and screen effective analysis targets from a security manager's perspective. Detected signs of anomalous attacks are then shown to the security manager as an alarm together with the log information, and can be connected to legacy systems. In the experiment, anomalies were detected in EDR event logs collected over five days and verified with a hybrid-analysis search. The approach is therefore expected to indicate when, and which IPs and processes, are suspicious based on the EDR event log, and to create a secure endpoint environment through countermeasures against the suspicious IPs and processes.
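The anomaly-scoring step can be illustrated with a stand-in for the autoencoder's reconstruction error: the squared distance of an event's feature vector from a learned profile of normal behavior. The feature layout, function names, and threshold below are assumptions, not the paper's model.

```python
from statistics import mean


def fit_profile(vectors):
    """Learn a per-feature mean profile from normal EDR event features."""
    return [mean(col) for col in zip(*vectors)]


def anomaly_score(profile, v):
    """Stand-in for autoencoder reconstruction error:
    squared distance from the learned normal profile."""
    return sum((a - b) ** 2 for a, b in zip(profile, v))


def alerts(profile, events, threshold):
    """Turn high-scoring events into alarm records for the security manager."""
    return [{"event": e["id"], "score": anomaly_score(profile, e["features"])}
            for e in events
            if anomaly_score(profile, e["features"]) > threshold]
```

In the paper's pipeline, the surviving high-score events would additionally be mapped to MITRE ATT&CK techniques before being raised as alarms, so the analyst sees both the anomaly score and the suspected tactic.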