• Title/Summary/Keyword: extended exponential function (확장지수함수)


Improved approach of calculating the same shape in graph mining (그래프 마이닝에서 그래프 동형판단연산의 향상기법)

  • No, Young-Sang;Yun, Un-Il;Kim, Myung-Jun
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.251-258 / 2009
  • Data mining is a method of extracting useful knowledge from huge volumes of data. Recently, a major research focus in data mining has been finding interesting patterns in graph databases, and increasingly efficient methods have been proposed in graph mining. However, graph analysis involves NP-hard problems. Graph pattern mining based on the pattern growth method finds the complete set of patterns satisfying a given property by extending a graph pattern edge by edge while avoiding the generation of duplicate patterns. This paper suggests an efficient approach to reducing the computing time of the pattern growth method, exploiting the property that similar patterns cause similar tasks, and proposes pruning methods that reduce the search space. Based on an extensive performance study, we discuss the results and future work.
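To illustrate why duplicate-avoiding pattern growth and pruning matter, here is a brute-force graph isomorphism test whose factorial cost is exactly what such pruning tries to avoid (a sketch for small graphs, not the paper's algorithm):

```python
from itertools import permutations

def is_isomorphic(g1, g2):
    """Brute-force isomorphism test for small undirected graphs.

    Graphs are dicts mapping node -> set of neighbour nodes.  Runtime is
    factorial in the node count, which is why graph miners prune the
    search space instead of testing candidates exhaustively.
    """
    n1, n2 = sorted(g1), sorted(g2)
    if len(n1) != len(n2):
        return False
    # Cheap pruning: degree sequences of isomorphic graphs must match.
    if sorted(len(g1[v]) for v in n1) != sorted(len(g2[v]) for v in n2):
        return False
    for perm in permutations(n2):
        mapping = dict(zip(n1, perm))
        if all({mapping[u] for u in g1[v]} == g2[mapping[v]] for v in n1):
            return True
    return False
```

The degree-sequence check is a tiny example of the kind of pruning that avoids exploring hopeless candidates.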

The Study for ENHPP Software Reliability Growth Model based on Burr Coverage Function (Burr 커버리지 함수에 기초한 ENHPP소프트웨어 신뢰성장모형에 관한 연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Society of Computer and Information / v.12 no.4 / pp.33-42 / 2007
  • Accurate prediction of software release times and estimation of the reliability and availability of a software product require quantification of a critical element of the software testing process: test coverage. The resulting model is called the enhanced non-homogeneous Poisson process (ENHPP). In this paper, the exponential coverage and S-shaped models are reviewed, and the Kappa coverage model, which allows efficient application to software reliability, is proposed. Parameters were estimated by the maximum likelihood method with the bisection algorithm, and model selection was based on the SSE statistic and the Kolmogorov distance. From the analysis of mission time using the NTDS data, this comparative study shows the excellent performance of the Burr coverage model relative to the exponential coverage and S-shaped models. An analysis of the failure data comparing the Kappa coverage model with existing models (using arithmetic and Laplace trend tests and bias tests) is also presented.
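A sketch of the ENHPP mean value function m(t) = a·c(t) with a Burr-type coverage function c(t) = 1 - (1 + t^c)^(-k); this parameterization is a common textbook form of the Burr coverage function, assumed here rather than taken from the paper:

```python
def burr_coverage(t, c, k):
    """Burr-type test coverage function c(t) = 1 - (1 + t^c)^(-k).

    Starts at 0 (no coverage before testing) and rises toward 1
    (complete coverage) as testing time t grows.
    """
    return 1.0 - (1.0 + t ** c) ** (-k)

def enhpp_mean_value(t, a, c, k):
    """ENHPP mean value function: expected number of faults detected
    by time t, where a is the expected total number of faults."""
    return a * burr_coverage(t, c, k)
```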


A Study on Performance Improvement Techniques for Joint CDMA/PRMA (Joint CDMA/PRMA의 성능향상 기법에 관한 연구)

  • 국광호;이강원;박정우;강석열
    • Proceedings of the Korea Society for Simulation Conference / 2001.05a / pp.134-134 / 2001
  • With the rapid growth of demand for multimedia communication over mobile networks, next-generation mobile communication systems are expected to adopt a packet-switched network architecture. A voice terminal with a VAD (Voice Activity Detector) is modeled as a two-state Markov chain alternating between talk spurts, during which data are generated (exponentially distributed with mean t1), and silence periods, during which no data are generated (exponentially distributed with mean t2). Goodman et al. proposed PRMA (Packet Reservation Multiple Access), which accommodates more subscribers by letting voice terminals transmit only during talk spurts. In PRMA the time axis is divided into slots, and several slots form a frame. When a voice terminal in a silence period enters a talk spurt, it transmits the first data packet of the spurt in one slot. Terminals transmit according to a channel access probability, the probability of transmitting in each slot; upon a successful transmission, a terminal reserves the slot and transmits in the same slot position in subsequent frames. DS/CDMA (Direct Sequence / Code Division Multiple Access) is expected to be adopted in next-generation mobile networks owing to its capacity advantages, soft handover capability, and easier cell planning. The capacity of a CDMA system is limited by interference, and MAI (Multiple Access Interference) strongly affects system performance. Brand et al. proposed the Joint CDMA/PRMA protocol, which extends the PRMA concept to the DS/CDMA environment to reduce the variance of the interference; in their scheme, the transmission probability in each slot depends on the number of voice terminals holding reservations in that slot, and the channel access probabilities were derived by simulation. Since voice terminals require real-time service while data terminals do not, it is desirable to give voice terminals transmission priority under heavy traffic; for this purpose, Brand et al. proposed computing the channel access probability adaptively according to the traffic state of each slot. Because the performance of Joint CDMA/PRMA depends heavily on the efficiency of the channel access function, this study proposes a more efficient way of computing the channel access probability: it is computed from not only the number of voice terminals holding reservations in each slot but also the number of terminals attempting to make a reservation in that slot, and its performance is analyzed. Simulation results show that the proposed method of computing the channel access probability yields a considerable performance improvement over the existing methods.
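The two-state on/off voice source described above can be checked with a short simulation; the fraction of time spent in talk spurts should approach t1/(t1+t2) (the parameter values below are illustrative, not the paper's):

```python
import random

def simulate_voice_activity(t_talk, t_silence, n_cycles=10000, seed=1):
    """Two-state on/off voice source: exponentially distributed talk
    spurts (mean t_talk) and silence periods (mean t_silence).

    Returns the fraction of time spent talking, which converges to the
    voice activity factor t_talk / (t_talk + t_silence).
    """
    rng = random.Random(seed)
    talk = silence = 0.0
    for _ in range(n_cycles):
        talk += rng.expovariate(1.0 / t_talk)      # one talk spurt
        silence += rng.expovariate(1.0 / t_silence)  # one silence period
    return talk / (talk + silence)
```

In PRMA-style analyses this activity factor is what determines how many extra voice terminals the reservation scheme can admit.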


A New Robust Continuous VSCS by Saturation Function for Uncertain Nonlinear Plants (불확실 비선형 플랜트를 위한 포화 함수에 의한 새로운 강인한 연속 가변구조제어시스템)

  • Lee, Jung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.3 / pp.30-39 / 2011
  • In this note, a systematic design of a new robust nonlinear continuous variable structure control system (VSCS), based on a modified state-dependent nonlinear form, is presented for the control of uncertain affine nonlinear systems with mismatched uncertainties and matched disturbances. After the affine uncertain nonlinear system is represented in state-dependent nonlinear form, a systematic design of the robust nonlinear VSCS is given. The uncertainty of the nonlinear system function is separated into two parts, a state-dependent term and a state-independent term, to broaden the class of target plants. A transformed linear sliding surface is applied so that the resulting closed-loop dynamics are linear and the existence condition of the sliding mode is easily satisfied. A corresponding control input is proposed that guarantees closed-loop exponential stability and the existence condition of the sliding mode on the transformed linear sliding surface, as investigated in Theorem 1. For practical application, the discontinuity of the control input, an inherent property of variable structure systems, is dramatically improved. A design example and simulation studies verify the usefulness of the proposed controller.
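The standard chattering fix the title refers to, replacing the discontinuous sign function with a saturation function inside a boundary layer, can be sketched as follows (the gain and boundary-layer width below are illustrative):

```python
def sat(s, phi):
    """Saturation function: linear inside the boundary layer |s| <= phi,
    clipped to +/-1 outside.  Substituting sat(s, phi) for sign(s) makes
    a sliding-mode control input continuous and suppresses chattering."""
    return max(-1.0, min(1.0, s / phi))

def smc_input(s, k=2.0, phi=0.1):
    """Continuous sliding-mode-style switching term u = -k * sat(s, phi),
    where s is the sliding surface value."""
    return -k * sat(s, phi)
```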

A statistical analysis of the fat mass experimental data using random coefficient model (변량계수모형을 이용한 체지방 실험자료에 관한 통계적 분석)

  • Jo, Jin-Nam
    • Journal of the Korean Data and Information Science Society / v.22 no.2 / pp.287-296 / 2011
  • Thirty-six female students participated in an experiment on fat mass weight loss. They kept diaries of the foods they ate every day, photographed the foods, sent the pictures to the experimenter by camera phone, and consulted with him about fat mass loss once a week over an 8-week period. Fat mass weight and related factors were measured repeatedly every week during the 8 weeks. The repeated-measures data were used to fit various random coefficient models, from which an optimal model was selected. In the optimal model, the fixed effects of baseline, body mass index, diastolic blood pressure, total cholesterol, and time were all highly significant, and a fixed quadratic time effect was present. The variance components corresponding to the subject effect and the linear time effect of the random coefficients were all positive, so random coefficients up to the linear term were retained as the optimal model. The treatment reduced weight by an average of 2.1 kg by the end of the period.
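A minimal simulation of the kind of random coefficient model fitted above, a random intercept and a random linear slope per subject; all parameter values are illustrative (the slope of -0.26 kg/week is chosen only so that 8 weeks give roughly the 2.1 kg average loss reported, and is not an estimate from the paper):

```python
import random

def simulate_random_coefficients(n_subjects=36, n_weeks=8, b0=60.0, b1=-0.26,
                                 sd_u0=5.0, sd_u1=0.1, sd_e=0.5, seed=7):
    """Simulate longitudinal data y_ij = (b0 + u0_i) + (b1 + u1_i) * t_j + e_ij,
    where u0_i, u1_i are subject-level random intercept and slope.

    Returns a list of (subject, week, y) triples, one per measurement.
    """
    rng = random.Random(seed)
    data = []
    for i in range(n_subjects):
        u0 = rng.gauss(0.0, sd_u0)   # random intercept for subject i
        u1 = rng.gauss(0.0, sd_u1)   # random slope for subject i
        for t in range(n_weeks + 1):
            data.append((i, t, b0 + u0 + (b1 + u1) * t + rng.gauss(0.0, sd_e)))
    return data

def pooled_slope(data):
    """Ordinary least-squares slope of y on t over the pooled data;
    with a balanced design it recovers the fixed time effect b1."""
    n = len(data)
    mt = sum(t for _, t, _ in data) / n
    my = sum(y for _, _, y in data) / n
    num = sum((t - mt) * (y - my) for _, t, y in data)
    den = sum((t - mt) ** 2 for _, t, _ in data)
    return num / den
```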

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in the volatility of stock market returns. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. Since Black Monday in 1987, however, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1487 observations; 1187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE statistic shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility will increase, buy volatility today; if it will decrease, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values cannot themselves be traded, but our simulation results are still meaningful because the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. Profitable trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return, while SVR-based symmetric S-GARCH shows +526.4%; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for SVR-based E-GARCH; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3% for SVR-based GJR-GARCH. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also trades more frequently. This study has some limitations: our models are based solely on SVR, and other artificial intelligence models should be examined for better performance; we also do not consider costs incurred in trading, including brokerage commissions and slippage.
The IVTS trading performance is not fully realistic, since we use historical volatility values as trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
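A minimal GARCH(1,1) simulator showing the conditional variance recursion that both the MLE- and SVR-based estimators above are fitting (the parameter values are illustrative, not estimates from the paper's KOSPI 200 data):

```python
import random

def simulate_garch11(n, omega=1e-5, alpha=0.08, beta=0.90, seed=42):
    """Simulate a GARCH(1,1) return series:

        sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
        r_t      = sqrt(sigma2_t) * z_t,   z_t ~ N(0, 1)

    Returns (returns, conditional_variances).  Starting variance is the
    unconditional variance omega / (1 - alpha - beta).
    """
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - alpha - beta)
    returns, variances = [], []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        r = (sigma2 ** 0.5) * z
        returns.append(r)
        variances.append(sigma2)
        # Volatility clustering: a large shock raises tomorrow's variance.
        sigma2 = omega + alpha * r * r + beta * sigma2
    return returns, variances
```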

Numerical studies on approximate option prices (근사적 옵션 가격의 수치적 비교)

  • Yoon, Jeongyoen;Seung, Jisu;Song, Seongjoo
    • The Korean Journal of Applied Statistics / v.30 no.2 / pp.243-257 / 2017
  • In this paper, we compare several methods of approximating option prices: the Edgeworth expansion, A-type and C-type Gram-Charlier expansions, a method using the normal inverse Gaussian (NIG) distribution, and an asymptotic method using nonlinear regression. We used two different types of approximation. The first (the RNM method) approximates the risk-neutral probability density function of the log return of the underlying asset and computes the option price from it. The second (the OPTIM method) finds an approximate option pricing formula and then estimates its parameters to compute the option price. For the simulation experiments, we generated underlying asset data from the Heston model, a well-known stochastic volatility model, and from the NIG model, a well-known Lévy model. We also applied the approximation methods to KOSPI200 call option prices as a real-data application. We found that the OPTIM method performs better on average than the RNM method. Within OPTIM, the A-type Gram-Charlier expansion and the asymptotic method using nonlinear regression showed relatively better performance; within RNM, the method using the NIG distribution was relatively better than the others.
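The expansions compared above all correct a Gaussian baseline for skewness and excess kurtosis; that baseline, the Black-Scholes call formula, can be written with only the standard library (this is the textbook formula, not one of the paper's approximations):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes European call price for spot S, strike K,
    maturity T (years), risk-free rate r, and volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

Gram-Charlier-type methods add skewness and kurtosis correction terms to this price; the NIG and asymptotic methods replace the normal density outright.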

Data Volume based Trust Metric for Blockchain Networks (블록체인 망을 위한 데이터 볼륨 기반 신뢰 메트릭)

  • Jeon, Seung Hyun
    • Journal of Convergence for Information Technology / v.10 no.10 / pp.65-70 / 2020
  • With the appearance of Bitcoin, which builds peer-to-peer networks for transactions of digital content and issuance of a cryptocurrency, many blockchain networks have been developed to improve transaction performance. Recently, Joseph Lubin discussed Decentralization Transactions per Second (DTPS) to mitigate the value of biased TPS figures. However, Lubin's trust model did not sufficiently consider the security issue in the scalability trilemma. Accordingly, we propose a trust metric based on blockchain size, stale block rate, and average block size, using a sigmoid function and convex optimization. Via numerical analysis, we present the optimal blockchain size of popular blockchain networks and compare the proposed trust metric with Lubin's trust model. Moreover, Bitcoin-based blockchain networks such as Litecoin were superior to Ethereum in trust satisfaction and data volume.
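A sigmoid-based score in the spirit of the metric described above might look like the following; the functional form and the weights are our assumptions for illustration, not the paper's actual metric:

```python
from math import exp

def sigmoid(x):
    """Logistic function mapping any real score into (0, 1)."""
    return 1.0 / (1.0 + exp(-x))

def trust_score(avg_block_size_mb, stale_rate,
                size_weight=1.0, stale_weight=10.0):
    """Illustrative trust score (assumed form, not the paper's metric):
    larger blocks raise throughput-related trust, while a higher stale
    (orphan) block rate lowers security-related trust."""
    return sigmoid(size_weight * avg_block_size_mb - stale_weight * stale_rate)
```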

Switch-Level Binary Decision Diagram (SLBDD) for Circuit Design Verification (회로 설계 검증을 위한 스위치-레벨 이진 결정 다이어그램)

  • 김경기;이동은;김주호
    • Journal of the Korean Institute of Telematics and Electronics C / v.36C no.5 / pp.1-12 / 1999
  • A new algorithm for constructing a binary decision diagram (BDD) for the design verification of switch-level circuits is proposed in this paper. In a switch-level circuit, functions are characterized by serial and parallel connections of switches, and the final logic values may take high-impedance and unstable states in addition to the logic values 0 and 1. We extend the BDD to represent the functions of switch-level circuits as acyclic graphs, called switch-level binary decision diagrams (SLBDDs). The graph representation of a function is, in the worst case, exponential in the number of inputs, so the ordering of decision variables plays a major role in determining graph size. Considering the presence of pass transistors and the domino logic of precharging circuitry, we also propose an input-ordering algorithm that keeps graph sizes small. We conducted several experiments on various benchmark circuits, and the results show that our algorithm is efficient enough to apply to functional simulation, power estimation, and fault simulation of switch-level designs.
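The serial/parallel switch semantics with a high-impedance value can be sketched as follows (a toy evaluation model to show why 'Z' must join 0 and 1, not the SLBDD construction itself):

```python
Z = 'Z'  # high-impedance: no conducting path drives the node

def series(*conducting):
    """A series chain of switches conducts only when every switch conducts."""
    return all(conducting)

def parallel(*conducting):
    """Parallel switches conduct when at least one branch conducts."""
    return any(conducting)

def node_value(driven_value, conducting):
    """Switch-level node: the driven logic value (0 or 1) if a conducting
    path to the driver exists, otherwise high impedance Z."""
    return driven_value if conducting else Z
```

An SLBDD must branch on switch inputs until the function resolves to one of these three outcomes, which is why its decision variables and ordering differ from an ordinary gate-level BDD.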


Persistent Photoconductivity in Hydrogenated Amorphous Carbon Thin Films (수소화된 비정질 탄소 박막에서의 지속광전기전도도)

  • Kang, Sung Soo;Lee, Won Jin;Sung, Duck Yong
    • Journal of Korean Ophthalmic Optics Society / v.1 no.2 / pp.49-55 / 1996
  • Hydrogenated amorphous carbon (a-C:H) films were fabricated by the low-frequency (60 Hz) glow discharge of a mixture of methane and hydrogen, and their electrical properties were investigated. We observed that a-C:H films show persistent photoconductivity (PPC) upon illumination with heat-filtered white light for a few seconds. The PPC was about 10 times larger than the annealed dark conductivity, and the samples clearly showed metastable characteristics. As the illumination time increased from 1 to 100 min, the annealing temperature at which the PPC disappeared increased from 100°C to 130°C; the annealing activation energy of the PPC was about 0.39 eV. Illumination longer than 80 min leads to the formation of π defects and to a decrease of the PPC. From these results, we tentatively propose that states in the π band act as deep trap centers generating the metastabilities.
