• Title/Summary/Keyword: Change Log Information


An Auxiliary Log Area for In-Page Logging Scheme (In-Page 로깅 기법을 위한 보조 로그 영역)

  • Van, Jae-Kwang;Jin, Rize;Kim, Sungsoo;Chung, Tae-Sun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2014.04a
    • /
    • pp.729-731
    • /
    • 2014
  • When storing and managing B-tree data on flash memory [4, 5], the cost of the block write and erase operations caused by frequent modifications and structural changes shortens the usable lifetime of the flash memory. To address this problem, we review the log-based storage schemes currently in wide use and propose an algorithm that reduces the number of merge operations by allocating auxiliary log blocks to GRR (Ground Round Robin), a storage algorithm based on dynamic block grouping and round-robin ordering that was previously proposed to store and manage B-trees efficiently.
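The effect the abstract describes can be illustrated with a toy simulation: a per-node log block absorbs updates until it overflows, and a shared auxiliary log area absorbs the overflow before a costly merge is forced. The block size, workload, and spill policy below are illustrative assumptions, not the paper's GRR algorithm.

```python
def count_merges(updates, log_block_size, aux_blocks=0):
    """Count merge operations for a stream of B-tree node updates.

    Each node's log block absorbs up to `log_block_size` updates; when
    it overflows, updates spill into shared auxiliary blocks (if any
    space remains) before a costly merge is forced.
    """
    merges = 0
    used = {}           # node id -> entries in its own log block
    aux_used = 0        # entries spilled into the shared auxiliary area
    aux_capacity = aux_blocks * log_block_size
    for node in updates:
        used.setdefault(node, 0)
        if used[node] < log_block_size:
            used[node] += 1
        elif aux_used < aux_capacity:
            aux_used += 1           # spill: merge deferred
        else:
            merges += 1             # merge the node's logs back in place
            used[node] = 1          # log block reused after the merge
            aux_used = 0            # auxiliary area reclaimed as well
    return merges

workload = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0] * 10  # skewed node updates
without_aux = count_merges(workload, log_block_size=4)
with_aux = count_merges(workload, log_block_size=4, aux_blocks=2)
```

Even this crude model shows the intended trend: with auxiliary blocks available, the same workload triggers noticeably fewer merges.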

Dynamic Selection Algorithms for Replicated Multimedia Servers by Analyzing their Web Logs (웹로그를 이용한 부본 멀티미디어 서버의 동적 선택 알고리즘)

  • 이경희;한정혜
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2001.04a
    • /
    • pp.745-747
    • /
    • 2001
  • Unlike other kinds of services, multimedia content services over the Internet consist of data that are meaningful only when played back continuously and on time, and QoS is determined by how well this property is satisfied. To provide good service, a common approach is to deploy multiple replica servers of the original server and distribute service requests among them. In this study, we propose a dynamic algorithm that checks the load of each replica server using prior information on the distribution of document transfer volume from web logs, and then distributes subsequent client requests so that they are served actively and effectively. This dynamic selection algorithm probabilistically distributes client requests for large volumes of QoS-critical multimedia content using a process capability index as a proximity measure based on HTTP response time and variations in document size.
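The selection idea can be sketched as follows: score each replica with a process-capability-style measure (how far its mean response time sits inside a target, in units of its variation) and route requests probabilistically in proportion to the score. The scoring formula and server fields below are illustrative assumptions, not the paper's exact proximity measure.

```python
import random

def capability_score(target_ms, mean_ms, stdev_ms):
    """Cpk-style score: how far the mean is inside the target, in sigmas."""
    if stdev_ms <= 0:
        stdev_ms = 1e-9
    return max((target_ms - mean_ms) / (3 * stdev_ms), 0.0)

def pick_server(servers, target_ms, rng=random):
    """Choose a replica with probability proportional to its score."""
    scores = [capability_score(target_ms, s["mean_ms"], s["stdev_ms"])
              for s in servers]
    total = sum(scores)
    if total == 0:                       # no server meets the target:
        return rng.choice(servers)       # fall back to a uniform choice
    return rng.choices(servers, weights=scores, k=1)[0]

replicas = [
    {"name": "replica-a", "mean_ms": 120, "stdev_ms": 15},
    {"name": "replica-b", "mean_ms": 300, "stdev_ms": 80},
]
```

A fast, stable replica thus receives most requests, while a slow or erratic one still gets a trickle, which matches the probabilistic-distribution behavior the abstract describes.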


Long-Range Dependence and 1/f Noise in a Wide Area Network Traffic (광역 네트워크 트래픽의 장거리 상관관계와 1/f 노이즈)

  • Lee, Chang-Yong
    • Journal of KIISE:Information Networking
    • /
    • v.37 no.1
    • /
    • pp.27-34
    • /
    • 2010
  • In this paper, we examine long-range dependence in an active measurement of network traffic, a characteristic well known from analyses of passive network traffic measurements. To this end, we utilize RTT (Round Trip Time), a typical active measurement collected by the PingER project, and analyze the time series of both the RTT and its volatilities. The RTT time series exhibits long-range dependence, or 1/f noise. The volatilities, defined as a higher-order variation, follow a log-normal distribution. Furthermore, the volatilities show long-range dependence over relatively short time intervals, and long-range dependence and/or 1/f noise over long time intervals. From this study, we find that long-range dependence is a characteristic not only of passive traffic measurements but also of active measurements of network traffic such as RTT. From these findings, we can infer that long-range dependence is a characteristic of network traffic independent of the type of measurement. In particular, the active measurement exhibits 1/f noise, which is not usually found in passive measurements.
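One standard way to quantify long-range dependence, sketched below, is the aggregated-variance method: for a self-similar series the variance of block means decays as m**(2H - 2), so the slope of log(variance) against log(block size) yields the Hurst exponent H (H near 0.5 means no long-range dependence; H near 1 suggests long-range dependence and a 1/f-like spectrum). This is a generic estimator, not the paper's exact analysis of the PingER RTT data.

```python
import math
import random
from statistics import fmean, pvariance

def hurst_aggregated_variance(series, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the Hurst exponent via the aggregated-variance method."""
    xs, ys = [], []
    for m in block_sizes:
        # variance of the means of non-overlapping blocks of size m
        means = [fmean(series[i:i + m])
                 for i in range(0, len(series) - m + 1, m)]
        xs.append(math.log(m))
        ys.append(math.log(pvariance(means)))
    # least-squares slope of log-variance against log-block-size
    mx, my = fmean(xs), fmean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return (slope + 2) / 2

rng = random.Random(0)
white_noise = [rng.gauss(0, 1) for _ in range(4096)]
h = hurst_aggregated_variance(white_noise)   # expected near 0.5
```

For uncorrelated noise the estimate hovers around 0.5; an RTT series with long-range dependence would push the estimate toward 1.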

Game-bot detection based on Clustering of asset-varied location coordinates (자산변동 좌표 클러스터링 기반 게임봇 탐지)

  • Song, Hyun Min;Kim, Huy Kang
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.25 no.5
    • /
    • pp.1131-1141
    • /
    • 2015
  • In this paper, we propose a new machine-learning-based method for distinguishing game bots from normal players in MMORPGs by inspecting players' action log data, especially in-game money increase/decrease event logs. DBSCAN (Density-Based Spatial Clustering of Applications with Noise), one of the density-based clustering algorithms, is used to extract attributes describing the spatial characteristics of each player, such as the number of clusters and the ratios of core points, member points, and noise points. Above all, even if game-bot developers know the principles of this detection system, they cannot evade it, because moving over a wide area to hunt monsters is very inefficient and unproductive. As a result, game bots show definite differences from normal players in their spatial characteristics: their ratio of noise points is very low, less than 5%, while normal players' ratio of noise points is high. In experiments on real action log data from an MMORPG, our game-bot detection system shows good performance with high detection accuracy.
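The noise-ratio feature can be reproduced with a minimal pure-Python DBSCAN over 2-D location coordinates. The parameter values (eps, min_pts) and the two sample tracks are illustrative assumptions, not the paper's settings or data.

```python
import math

def region_query(points, i, eps):
    px, py = points[i]
    return [j for j, (qx, qy) in enumerate(points)
            if math.hypot(px - qx, py - qy) <= eps]

def dbscan_labels(points, eps, min_pts):
    """Return a label per point: a cluster id (0, 1, ...) or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = region_query(points, i, eps)
        if len(neighbors) < min_pts:
            labels[i] = -1                    # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(neighbors)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster           # border point, reclaimed
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = region_query(points, j, eps)
            if len(j_neighbors) >= min_pts:   # j is a core point: expand
                seeds.extend(j_neighbors)
    return labels

def noise_ratio(points, eps=1.5, min_pts=4):
    labels = dbscan_labels(points, eps, min_pts)
    return labels.count(-1) / len(labels)

# A bot farming one tight spot vs. a player roaming the whole map.
bot_track = [(10 + 0.1 * (i % 5), 20 + 0.1 * (i % 7)) for i in range(60)]
player_track = [(3 * i % 97, 5 * i % 89) for i in range(60)]
```

The bot's coordinates collapse into one dense cluster with essentially no noise points, while the roaming player's scattered coordinates are mostly labeled noise, mirroring the below-5% vs. high noise-ratio split the abstract reports.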

Personal Flight Schedule System (개인비행 일정 관리 시스템)

  • Kim, Seong-Min;Song, Yeong-Chang;Yu, Yu-Jeong;Lee, Yu-Jin;Joo, Jong-Wha J.
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.330-332
    • /
    • 2022
  • Recently, as the pandemic situation has improved, overseas business trips, which had declined, are increasing again. Flights, the main means of transportation for overseas trips, change frequently depending on weather conditions, and to check a changed schedule a traveler must either log in to each airline or search for the flight number directly on the airport website. Viewing one's schedule is therefore cumbersome, and sharing it with a third party is even more inconvenient, since the only option is for the third party to enter the flight number on the airport website themselves. To address these inconveniences, this study devises a method by which users can register and view their schedules on a single website and easily share them with third parties.

A Key Recovery Mechanism for Reliable Group Key Management (신뢰성 있는 그룹키 관리를 위한 키 복구 메커니즘)

  • 조태남;김상희;이상호;채기준;박원주;나재훈
    • Journal of KIISE:Information Networking
    • /
    • v.30 no.6
    • /
    • pp.705-713
    • /
    • 2003
  • For group security, whether to protect group data or to support billing, group keys should be updated via key update messages whenever the group membership changes. If these messages are lost, the group data cannot be decrypted, which is why recovering lost keys is so important. Messages lost while a member is logged off cannot be recovered in real time. Having the KDC (Key Distribution Center) save all messages and resend them not only requires large storage space, but also causes unnecessary keys to be transmitted and decrypted. This paper analyzes the problem of lost key update messages, along with other problems that may arise during the member login procedure, and gives an efficient method for recovering group keys and auxiliary keys. The method allows group keys and auxiliary keys to be recovered and sent effectively using information stored in the key tree. The group key generation method presented in this paper is simple and enables any group key to be recovered without being stored. It also eliminates the transmission and decryption of useless auxiliary keys.
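The core idea of recovering any group key without storing it can be sketched as a deterministic derivation: if each group-key version is derived from a KDC master secret and a version counter, the KDC can re-derive any missed version on demand instead of logging every update message. The HMAC-SHA256 construction below is our own illustrative stand-in, not the paper's key-tree scheme.

```python
import hashlib
import hmac

def group_key(master_secret: bytes, version: int) -> bytes:
    """Derive group-key version `version` from the KDC master secret."""
    return hmac.new(master_secret,
                    b"group-key|" + str(version).encode(),
                    hashlib.sha256).digest()

def recover_missed_keys(master_secret: bytes, last_seen: int, current: int):
    """Re-derive every version a rejoining member missed while offline."""
    return {v: group_key(master_secret, v)
            for v in range(last_seen + 1, current + 1)}
```

With such a scheme the KDC stores only the master secret; a member who was logged off during versions 4 and 5, say, receives exactly those re-derived keys rather than a replay of all buffered update messages.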

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with an MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1487 observations; 1187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; only the polynomial kernel shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's volatility is forecast to increase, buy volatility today; if it is forecast to decrease, sell volatility today; if the forecast direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic, because historical volatility values themselves cannot be traded, but our simulation results are still meaningful, since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can now use. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return while SVR-based symmetric S-GARCH shows +526.4%; MLE-based asymmetric E-GARCH shows -72% while SVR-based E-GARCH shows +245.6%; and MLE-based asymmetric GJR-GARCH shows -98.7% while SVR-based GJR-GARCH shows +126.3%. The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider trading costs such as brokerage commissions and slippage, and the IVTS trading performance is unrealistic since historical volatility values are used as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.

Conflict Resolution for Data Synchronization in Multiple Devices (다중 디바이스에서 데이터 동기화를 위한 충돌 해결)

  • Oh Seman;La Hwanggyun
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.2
    • /
    • pp.279-286
    • /
    • 2005
  • As the mobile environment has become widespread, data synchronization between mobile devices, or between a mobile device and a PC/server, is required. To address this problem, a consortium was established by companies such as Motorola, Ericsson, and Nokia, which released SyncML (Synchronization Markup Language) as an industry standard for interoperable data synchronization over various transport protocols. However, during synchronization, when more than two clients request data synchronization, data conflicts can occur. This paper studies the various causes of conflicts that can arise during data synchronization and groups them systematically. Based on the analyzed information, we compose Change Log Information (CLI) that tracks synchronization history, and we suggest an operation policy using the CLI. Finally, we design an algorithm and adopt the policy as a method for ensuring the safety and consistency of data.
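The role of change-log information in conflict handling can be sketched as follows: each client keeps CLI entries recording which item it modified and when; a conflict exists when both clients changed the same item since the last sync, and a simple last-writer-wins policy then picks the newer change. The field names and the resolution policy are illustrative assumptions, not the paper's exact CLI format.

```python
def detect_conflicts(cli_a, cli_b, last_sync):
    """Return the item ids modified by both clients after last_sync."""
    changed_a = {e["item"] for e in cli_a if e["ts"] > last_sync}
    changed_b = {e["item"] for e in cli_b if e["ts"] > last_sync}
    return changed_a & changed_b

def resolve_last_writer_wins(cli_a, cli_b, item):
    """Pick the most recent change-log entry for a conflicting item."""
    entries = [e for e in cli_a + cli_b if e["item"] == item]
    return max(entries, key=lambda e: e["ts"])

cli_phone = [{"item": "contact:42", "ts": 105, "value": "Kim"}]
cli_pc = [{"item": "contact:42", "ts": 110, "value": "Kim H."},
          {"item": "contact:7", "ts": 108, "value": "Lee"}]
```

Here only contact:42 was changed on both devices since the last sync, so it is the sole conflict, and the PC's later edit wins under this policy.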


Efficient Inverter Type Compressor System using the Distribution of the Air Flow Rate (공기 변화량 분포를 이용한 효율적인 인버터타입 압축기 시스템)

  • Shim, JaeRyong;Kim, Yong-Chul;Noh, Young-Bin;Jung, Hoe-kyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.10
    • /
    • pp.2396-2402
    • /
    • 2015
  • Air compressors, essential equipment in factory and plant operations, account for around 30% of total industrial electricity consumption in the U.S.A., so advanced technologies have been proposed to reduce their electricity consumption. When the fluctuation of the compressed-air flow rate is small, system stability increases, which in turn reduces electricity consumption and allows an efficient energy-system design. In statistical analysis, the normal distribution, log-normal distribution, gamma distribution, and the like are generally used to characterize a system. However, a single distribution may not fit data with a long tail, which represents sudden changes in air flow rate, especially in the extremes. In this paper, the authors decouple the compressed-air flow rate into two parts to form a mixture of distribution functions and suggest a method to reduce electricity consumption. The reduction stems from the fact that a generalized Pareto distribution estimates quantile values more accurately than a Gaussian distribution when the air flow rate exceeds a large threshold.
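The tail-quantile idea behind the mixture model can be sketched with the standard peaks-over-threshold formula: exceedances over a high threshold u are modeled with a generalized Pareto distribution (GPD), whose quantile estimate is x_p = u + (sigma/xi) * (((n / n_exceed) * (1 - p)) ** (-xi) - 1). The parameter values below are illustrative assumptions, not fitted to any airflow data.

```python
import math
from statistics import NormalDist

def gpd_quantile(p, u, sigma, xi, n, n_exceed):
    """Peaks-over-threshold quantile estimate at probability p for a
    GPD tail fitted above threshold u (xi = 0 uses the limit form)."""
    if xi == 0:
        return u - sigma * math.log((n / n_exceed) * (1 - p))
    return u + (sigma / xi) * (((n / n_exceed) * (1 - p)) ** (-xi) - 1)

def gaussian_quantile(p, mu, sd):
    """Quantile of a fitted normal distribution, for comparison."""
    return NormalDist(mu, sd).inv_cdf(p)

# Heavy-tailed GPD fit (xi > 0) vs. a Gaussian fit to the same data.
q_gpd = gpd_quantile(0.999, u=100, sigma=10, xi=0.3, n=10_000, n_exceed=500)
q_norm = gaussian_quantile(0.999, mu=50, sd=20)
```

With a positive shape parameter the GPD's extreme quantile sits far above the Gaussian one, which is precisely why a Gaussian fit under-provisions for sudden airflow spikes.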

Intrusion Detection Method Using Unsupervised Learning-Based Embedding and Autoencoder (비지도 학습 기반의 임베딩과 오토인코더를 사용한 침입 탐지 방법)

  • Junwoo Lee;Kangseok Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.8
    • /
    • pp.355-364
    • /
    • 2023
  • As advanced cyber threats continue to increase in recent years, it is difficult to detect new types of cyber attacks with existing pattern- or signature-based intrusion detection methods. Therefore, research on anomaly detection methods using data-driven artificial intelligence technology is increasing. In addition, supervised anomaly detection methods are difficult to use in real environments because they require sufficient labeled data for learning. Research on unsupervised methods that learn from normal data and detect anomalies by finding patterns in the data itself has therefore been actively conducted. This study aims to extract a latent vector that preserves useful sequence information from sequence log data and to develop an anomaly detection model using the extracted latent vector. Word2Vec was used to create a dense vector representation corresponding to the characteristics of each sequence, and an unsupervised autoencoder was developed to extract latent vectors from sequence data expressed as dense vectors. Three autoencoder variants were developed: a denoising autoencoder based on the recurrent neural network GRU (Gated Recurrent Unit), which suits sequence data; a one-dimensional convolutional autoencoder, to address the limited short-term memory that a GRU can exhibit; and an autoencoder combining the GRU and one-dimensional convolution. The data used in the experiments is the time-series-based NGIDS (Next Generation IDS Dataset). The experiments show that the autoencoder combining the GRU and one-dimensional convolution is more efficient, in terms of the training time needed to extract useful latent patterns, than either the GRU-based or the one-dimensional-convolution-based autoencoder alone, and it showed stable anomaly detection performance with smaller fluctuations.
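The final detection step of such an autoencoder-based IDS can be sketched in isolation: the trained autoencoder yields a reconstruction error per sequence, errors on held-out normal data calibrate a threshold (mean + k * std is one common choice), and anything above the threshold is flagged as an intrusion. The error values and k = 3 below are illustrative assumptions; the paper's GRU and 1-D CNN autoencoders would supply the real errors.

```python
from statistics import fmean, pstdev

def calibrate_threshold(normal_errors, k=3.0):
    """Threshold from reconstruction errors on held-out normal data."""
    return fmean(normal_errors) + k * pstdev(normal_errors)

def flag_anomalies(errors, threshold):
    """Flag every sequence whose reconstruction error exceeds the threshold."""
    return [e > threshold for e in errors]

normal_errors = [0.10, 0.12, 0.09, 0.11, 0.13, 0.10, 0.12, 0.11]
threshold = calibrate_threshold(normal_errors)
test_errors = [0.11, 0.95, 0.10]     # the middle sequence reconstructs badly
flags = flag_anomalies(test_errors, threshold)
```

Because the autoencoder is trained only on normal traffic, attack sequences reconstruct poorly and land above the calibrated threshold, which is what makes the scheme usable without labeled attack data.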