• Title/Summary/Keyword: Log analysis

Search Results: 2,168 (Processing Time: 0.04 seconds)

A Study on Ground Vehicle Mechanics for Steep Slope Forest Operations - Rubber-Tired Log Skidding Tractor Operations - (급경사지 산림작업을 위한 차량의 역학분석에 관한 연구 -차륜형 집재작업 트랙터를 중심으로-)

  • Chung, Joo Sang;Chung, Woo Dam
    • Journal of Korean Society of Forest Science
    • /
    • v.84 no.2
    • /
    • pp.218-225
    • /
    • 1995
  • In this paper, a mechanical analysis model for steep-slope log-skidding operations of a rubber-tired tractor is discussed and the applicability of the model is investigated. The model consists largely of mathematical models for log drag, dynamic vehicle weight distribution and soil-vehicle traction. For the case study, a theoretical data set for log-skidding operations is used to investigate the factors influencing the results of the mechanical analysis and the productivity of skidding operations. The analyses cover 1) the effect of log choking methods on the tangential log-skidding force, 2) the effects of changes in travel speed and log load on the required input power to the wheels, and 3) the log-skidding performance of two-wheel drive compared with that of four-wheel drive.

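As a rough illustration of the kind of force balance such a model involves (not the paper's actual equations), here is a minimal Python sketch of the tangential log-skidding force on a slope and the corresponding wheel input power; the log weight, drag coefficient, choking fraction and travel speed are hypothetical values.

```python
import math

def skidding_force(log_weight_n, slope_deg, drag_coeff, dragged_fraction=1.0):
    """Tangential force (N) needed to skid a log up a slope.

    dragged_fraction is the share of the log weight resting on the ground;
    a choking method that lifts the leading end shifts part of the weight
    onto the tractor, so only the remainder contributes to ground drag.
    """
    theta = math.radians(slope_deg)
    friction = drag_coeff * dragged_fraction * log_weight_n * math.cos(theta)
    gravity = log_weight_n * math.sin(theta)   # the whole log still has to move uphill
    return friction + gravity

def wheel_input_power(force_n, speed_m_per_s):
    """Required input power to the wheels (W), ignoring drivetrain losses."""
    return force_n * speed_m_per_s

# Hypothetical case: an 8 kN log on a 30% slope, drag coefficient 0.6,
# with 70% of the log weight dragged on the ground.
slope_deg = math.degrees(math.atan(0.30))
force = skidding_force(8000, slope_deg, 0.6, dragged_fraction=0.7)
print(f"force ~ {force:.0f} N, power at 1.5 m/s ~ {wheel_input_power(force, 1.5) / 1000:.1f} kW")
```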

Log Analysis System Design using RTMA

  • Park, Hee-Chang;Myung, Ho-Min
    • Proceedings of the Korean Data and Information Science Society Conference
    • /
    • 2004.04a
    • /
    • pp.225-236
    • /
    • 2004
  • Every web server maintains a repository of all actions and events that occur on the server. Server logs can be used to quantify user traffic. Intelligent analysis of these data provides a statistical baseline that can be used to determine server load, failed requests and other events that shed light on site usage patterns. This information provides valuable leads for marketing and site management activities. In this paper, we propose a design method for a log analysis system using the RTMA (real-time monitoring and analysis) technique.

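For a concrete picture of a basic server-log analysis step (the RTMA design itself is not reproduced here), the following minimal Python sketch parses common-log-format lines and counts failed requests; the sample lines are made up for illustration.

```python
import re
from collections import Counter

# Apache/Nginx common log format; the sample lines below are invented.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

sample_lines = [
    '203.0.113.7 - - [05/Apr/2004:10:12:01 +0900] "GET /index.html HTTP/1.0" 200 1043',
    '203.0.113.7 - - [05/Apr/2004:10:12:02 +0900] "GET /missing.gif HTTP/1.0" 404 209',
]

status_counts = Counter()
for line in sample_lines:
    m = LOG_PATTERN.match(line)
    if m:
        status_counts[m.group("status")] += 1

# 4xx and 5xx responses are treated as failed requests
failed = sum(n for status, n in status_counts.items() if status.startswith(("4", "5")))
print(status_counts, "failed requests:", failed)
```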

A Study on Improvement of Personal Information Protection Control Log Quality: A Case of the Health and Welfare Division (개인정보통합관제 로그품질 분석 및 개선에 관한 연구: 보건복지 분야 사례를 중심으로)

  • Lee, Yari;Hong, Kyong Pyo;Kim, Jung Sook
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.1
    • /
    • pp.42-51
    • /
    • 2015
  • In this paper, we analyze the quality status of the standardized logs of the Health and Welfare division and assess the characteristics of each institution's logs in order to establish criteria that minimize hazards and control the quality of the institution-specific log details used for extraction. As a result, a development direction for the extraction conditions is proposed so that the logs targeted by the integrated personal information control of the health and welfare sector can be adequately assessed and monitored. This makes it possible to improve the status and quality of the log information shared in connection with each institution's work. In addition, quality control and inspection standards were prepared in accordance with the characteristics of each institution's logs. Future research will include continuous analysis and improvement of log quality under the integrated control of shared personal information, and distributing information about log quality to the target institutions in advance. We therefore expect that proactive correction of personal information misuse and leakage can be achieved.
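
As a loose illustration of one kind of log quality inspection (the study's actual inspection standards are not given in the abstract), here is a hypothetical Python check of required-field completeness per institution; the field names and sample records are invented.

```python
import csv
from collections import defaultdict
from io import StringIO

# Hypothetical quality criterion: every control log record should populate these fields.
REQUIRED_FIELDS = ["timestamp", "institution", "operator_id", "target_id", "action"]

sample_csv = StringIO(
    "timestamp,institution,operator_id,target_id,action\n"
    "2014-05-01T10:00:00,A-center,op01,p123,view\n"
    "2014-05-01T10:02:00,B-center,,p456,download\n"
)

completeness = defaultdict(lambda: [0, 0])  # institution -> [filled fields, total fields]
for row in csv.DictReader(sample_csv):
    inst = row["institution"]
    for field in REQUIRED_FIELDS:
        completeness[inst][1] += 1
        if row.get(field, "").strip():
            completeness[inst][0] += 1

for inst, (filled, total) in completeness.items():
    print(f"{inst}: {filled / total:.0%} of required fields populated")
```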

Compositional data analysis by the square-root transformation: Application to NBA USG% data

  • Jeseok Lee;Byungwon Kim
    • Communications for Statistical Applications and Methods
    • /
    • v.31 no.3
    • /
    • pp.349-363
    • /
    • 2024
  • Compositional data refers to data in which the values of the components sum to a constant; the sample space is therefore a simplex, making it impossible to apply statistical methods developed for the usual Euclidean vector space. A natural approach to overcoming this restriction is to consider an appropriate transformation that moves the sample space onto a Euclidean space, and log-ratio type transformations, such as the additive log-ratio (ALR), the centered log-ratio (CLR) and the isometric log-ratio (ILR) transformations, have mostly been used. However, in scenarios with sparsity, where certain components take exact zero values, these log-ratio type transformations may not be effective. In this work, we suggest an alternative, the square-root transformation, which moves the original sample space onto the directional space. We compare the square-root transformation with the log-ratio type transformations through a simulation study and a real data example. In the real data example, we applied both types of transformations to USG% data obtained from the NBA and used a density-based clustering method, DBSCAN (density-based spatial clustering of applications with noise), to show the results.
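
A minimal Python sketch of the square-root transformation followed by DBSCAN, using synthetic compositional data in place of the NBA USG% data analyzed in the paper; the Dirichlet parameters and DBSCAN settings are illustrative only.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Hypothetical compositional data: each row sums to 1
X = rng.dirichlet(alpha=[2.0, 3.0, 5.0], size=200)

# Square-root transform: each row now has unit Euclidean norm, i.e. lies on the sphere
Z = np.sqrt(X)

# Centered log-ratio (CLR) for comparison; it breaks down if any component is exactly zero
clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)

labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(Z)
print("clusters found (excluding noise):", len(set(labels)) - (1 if -1 in labels else 0))
```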

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions for the system to continually operate after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based Mongo DB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as the MySQL databases have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes in the case wherein the stored data are distributed to various nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide but can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. The data models of the NoSQL are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage. 
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require a real-time log data analysis are stored in the MySQL module and provided real-time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation proves the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
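
A small pymongo sketch of the schema-less log storage idea described above, assuming a MongoDB instance on localhost; the database, collection and field names are illustrative, not those of the proposed system.

```python
from datetime import datetime
from pymongo import MongoClient, ASCENDING

# Assumes a local MongoDB server; connection string and names are hypothetical.
client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["events"]
logs.create_index([("timestamp", ASCENDING), ("branch", ASCENDING)])

# Schema-less documents: each log event may carry different fields.
logs.insert_many([
    {"timestamp": datetime(2013, 6, 3, 9, 1, 7), "branch": "HQ", "type": "login", "user": "u1001"},
    {"timestamp": datetime(2013, 6, 3, 9, 1, 9), "branch": "HQ", "type": "transfer",
     "user": "u1001", "amount": 250000, "currency": "KRW"},
])

# Simple aggregation: count events per type.
pipeline = [{"$group": {"_id": "$type", "count": {"$sum": 1}}}]
for row in logs.aggregate(pipeline):
    print(row)
```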

Comparative Analysis of regional and at-site analysis for the design rainfall by Log-Pearson Type III and GEV Distribution (Log-Pearson Type III 및 GEV분포모형에 의한 강우의 지점 및 지역빈도 비교분석)

  • Ryoo, Kyong-Sik;Lee, Soon-Hyuk
    • Proceedings of the Korean Society of Agricultural Engineers Conference
    • /
    • 2003.10a
    • /
    • pp.443-446
    • /
    • 2003
  • This study was conducted to derive regional design rainfall using the optimal distribution and frequency analysis method. Design rainfalls were calculated by regional and at-site analysis for the Log-Pearson Type III and GEV distributions and were compared using the relative efficiency (RE), the ratio of the relative root-mean-square errors (RRMSE) of the regional and at-site analyses. Consequently, optimal design rainfalls for each region and consecutive duration were derived by regional frequency analysis with the GEV distribution, and design rainfall maps were drawn using GIS techniques.

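A minimal scipy sketch of at-site frequency analysis with the two distributions compared in the paper, using a synthetic annual-maximum series in place of observed station data; the fitted values are illustrative only.

```python
import numpy as np
from scipy import stats

# Hypothetical annual-maximum rainfall series (mm); the study used observed station data.
annual_max = stats.genextreme.rvs(c=-0.1, loc=120, scale=35, size=40, random_state=42)

T = 100                # return period in years
p = 1 - 1 / T          # non-exceedance probability of the T-year event

# GEV: maximum-likelihood fit, then the T-year quantile
c, loc, scale = stats.genextreme.fit(annual_max)
x_gev = stats.genextreme.ppf(p, c, loc=loc, scale=scale)

# Log-Pearson Type III: fit a Pearson III distribution to log10 of the data
skew, lloc, lscale = stats.pearson3.fit(np.log10(annual_max))
x_lp3 = 10 ** stats.pearson3.ppf(p, skew, loc=lloc, scale=lscale)

print(f"100-year design rainfall: GEV {x_gev:.1f} mm, LP3 {x_lp3:.1f} mm")
```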

Log-based petrophysical analysis of Khatatba Formation in Shoushan Basin, North Western Desert, Egypt

  • Osli, Liyana Nadiah;Yakub, Nur Yusrina;Shalaby, Mohamed Ragab;Islam, Md. Aminul
    • Geosciences Journal
    • /
    • v.22 no.6
    • /
    • pp.1015-1026
    • /
    • 2018
  • This paper aims to investigate the reservoir quality and hydrocarbon potential of the Khatatba Formation, Qasr Field, in the Shoushan Basin of the North Western Desert, Egypt, by combining results from log-based petrophysical analysis, petrographic description and scanning electron microscope (SEM) images. Promising reservoir units are initially identified and evaluated through well log analysis of three wells in the study area. Petrophysical results are then compared with petrographic and SEM images of rock samples to identify features that characterize the reservoir quality. Well log results show that the Khatatba Formation in the study area has good sandstone reservoir intervals at depths ranging from 12,848 ft to 13,900 ft, with good effective porosity of 13-15% and hydrocarbon saturations greater than 83%. Petrographic analysis of these sandstone reservoir units indicates high concentrations of vacant pore spaces with good permeability that can easily be occupied by hydrocarbons. The availability of these pore spaces is attributed to pore-enhancing diagenetic features, mainly in the form of good primary porosity and dissolution. SEM images and EDX analysis confirmed the presence of hydrocarbons, indicating a good hydrocarbon-storing potential for the Khatatba Formation sandstones.
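
For context, two standard petrophysical relations commonly used in log-based analysis, density porosity and Archie's water saturation, are shown below as a Python sketch; the log readings and parameter values are hypothetical textbook defaults, not values from this study.

```python
def density_porosity(rhob, rho_matrix=2.65, rho_fluid=1.0):
    """Porosity from the bulk-density log (g/cc); sandstone matrix density by default."""
    return (rho_matrix - rhob) / (rho_matrix - rho_fluid)

def archie_sw(rt, phi, rw=0.03, a=1.0, m=2.0, n=2.0):
    """Water saturation from Archie's equation; 1 - Sw is the hydrocarbon saturation."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Hypothetical log readings for one depth point in a clean sandstone interval
rhob, rt = 2.42, 45.0          # bulk density (g/cc), deep resistivity (ohm-m)
phi = density_porosity(rhob)
sw = archie_sw(rt, phi)
print(f"effective porosity ~ {phi:.2%}, hydrocarbon saturation ~ {1 - sw:.2%}")
```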

UX Analysis for Mobile Devices Using MapReduce on Distributed Data Processing Platform (MapReduce 분산 데이터처리 플랫폼에 기반한 모바일 디바이스 UX 분석)

  • Kim, Sungsook;Kim, Seonggyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.9
    • /
    • pp.589-594
    • /
    • 2013
  • As the concepts of openness and sharing that characterize the web grow more and more popular, the device log data generated by both users and developers have become increasingly complicated. For this reason, a log data processing mechanism that automatically produces meaningful data sets from large amounts of log records has become necessary for mobile device UX (User eXperience) analysis. In this paper, we define the attributes of the log data to be analyzed, reflecting the characteristics of a mobile device, and collect real log data from mobile device users. Using the MapReduce programming paradigm on the Hadoop platform, we perform a mobile device User eXperience analysis in a distributed processing environment with the collected real log data. We then demonstrate the effectiveness of the proposed analysis mechanism by applying various combinations of Map and Reduce steps to produce a simple data schema from a large number of complex log records.
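
A toy Python illustration of the Map and Reduce steps applied to device log records; the record layout and keys are hypothetical, and the paper's analysis ran on Hadoop rather than in a single process.

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical mobile-device log records: (user, app, event)
records = [
    ("u1", "camera", "launch"), ("u1", "camera", "shoot"),
    ("u2", "browser", "launch"), ("u1", "browser", "launch"),
    ("u2", "browser", "scroll"), ("u2", "camera", "launch"),
]

def map_phase(recs):
    # Emit (key, 1) pairs keyed by (app, event), as a Hadoop mapper would
    for user, app, event in recs:
        yield (app, event), 1

def reduce_phase(pairs):
    # Group by key and sum the counts, as a Hadoop reducer would
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield key, sum(count for _, count in group)

for (app, event), count in reduce_phase(map_phase(records)):
    print(app, event, count)
```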

ILVA: Integrated audit-log analysis tool and its application. (시스템 보안 강화를 위한 로그 분석 도구 ILVA와 실제 적용 사례)

  • 차성덕
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.9 no.3
    • /
    • pp.13-26
    • /
    • 1999
  • Despite its numerous positive aspects, the widespread use of the Internet has resulted in an increased number of system intrusions, and the need for enhanced security mechanisms is urgent. Systematic collection and analysis of log data are essential in intrusion investigation. Unfortunately, existing logs are stored in diverse and incompatible formats, making automated intrusion investigation practically impossible. We examined the types of log data essential in intrusion investigation and implemented a tool that enables systematic collection and efficient analysis of voluminous log data. Our tool, based on an RDBMS and SQL, provides a graphical and user-friendly interface. We describe our experience of using the tool in an actual intrusion investigation and explain how the tool can be further enhanced.
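
A minimal sketch of the RDBMS-plus-SQL approach the abstract describes, using SQLite; the schema, sample rows and query are invented for illustration and are not the tool's actual design.

```python
import sqlite3

# Normalize heterogeneous logs into one table, then investigate with SQL.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE audit_log (
        ts      TEXT,      -- ISO-8601 timestamp
        host    TEXT,      -- machine that produced the record
        source  TEXT,      -- original log (syslog, wtmp, httpd, ...)
        user    TEXT,
        event   TEXT
    )
""")
con.executemany(
    "INSERT INTO audit_log VALUES (?, ?, ?, ?, ?)",
    [
        ("1999-03-02T23:55:01", "ns1", "syslog", "root", "su: failed"),
        ("1999-03-02T23:55:09", "ns1", "wtmp", "guest", "login"),
        ("1999-03-02T23:57:40", "web1", "httpd", "-", "GET /cgi-bin/phf"),
    ],
)

# Example investigative query: all activity on one host within a time window
rows = con.execute(
    "SELECT ts, source, user, event FROM audit_log "
    "WHERE host = ? AND ts BETWEEN ? AND ? ORDER BY ts",
    ("ns1", "1999-03-02T23:50:00", "1999-03-03T00:10:00"),
).fetchall()
for row in rows:
    print(row)
```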

QSAR Modeling of Toxicant Concentrations (EC50) on the Use of Bioluminescence Intensity of CMC Immobilized Photobacterium Phosphoreum (CMC 고정화 Photobacterium phosphoreum의 생체발광량을 이용한 독성농도(EC50)의 QSAR 모델)

  • 이용제;허문석;이우창;전억한
    • KSBB Journal
    • /
    • v.15 no.3
    • /
    • pp.299-306
    • /
    • 2000
  • Concern about the effects of toxic chemicals on the environment has led to the search for better bioassay test organisms and test procedures. Photobacterium phosphoreum was used successfully as a test organism, and the luminometer detection technique was an effective and simple method for determining the concentration of toxic chemicals. A total of 14 chlorine-substituted phenols, benzenes and ethanes were used for the EC50 experiments. The test results showed that toxicity to P. phosphoreum increased in the order phenol > benzene > ethane, and that toxicity also increased with the number of chlorine substitutions. A quantitative structure-activity relationship (QSAR) model can be used to predict EC50, saving time and effort. Correlation was well established with QSAR parameters such as $\log P$, $\log S$ and the solvatochromic parameters ($V_i/100$, $\pi^*$, $\beta_m$ and $\alpha_m$). The QSAR modeling was carried out with both multiple-regression and mono-regression analysis. These analyses resulted in the following QSAR equations: $\log EC_{50} = 2.48 + 0.914 \log S$ (n = 9, $R^2$ = 85.5%, RE = 0.378); $\log EC_{50} = 0.35 - 4.48\, V_i/100 + 2.84\, \pi^* + 9.46\, \beta_m - 4.48\, \alpha_m$ (n = 14, $R^2$ = 98.2%, RE = 0.012); $\log EC_{50} = 2.64 - 1.66 \log P$ (n = 5, $R^2$ = 98.8%, RE = 0.16); $\log EC_{50} = 3.44 - 1.09 \log P$ (n = 9, $R^2$ = 80.8%, RE = 0.207).

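As a small illustration of the mono-regression step, here is a Python least-squares fit of log EC50 against log P on hypothetical data points; the coefficients reported in the abstract came from the paper's own measured data, not from these values.

```python
import numpy as np

# Hypothetical (log P, log EC50) pairs standing in for measured values
log_p = np.array([1.46, 2.15, 2.64, 3.06, 3.72])
log_ec50 = np.array([0.30, -0.85, -1.70, -2.40, -3.55])

# Mono-regression: ordinary least squares for log EC50 = b0 + b1 * log P
slope, intercept = np.polyfit(log_p, log_ec50, deg=1)

# Coefficient of determination for the fit
pred = intercept + slope * log_p
ss_res = np.sum((log_ec50 - pred) ** 2)
ss_tot = np.sum((log_ec50 - log_ec50.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"log EC50 = {intercept:.2f} + {slope:.2f} log P, R^2 = {r2:.3f}")
```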