• Title/Summary/Keyword: Risk Metric

A Study on the Framework of Cutover Decision Making on Large-scale IS Development Projects: A Core Banking Development Case of D Bank (대규모 정보시스템 개발 프로젝트의 컷오버 의사결정 프레임워크에 관한 연구: D은행 코어뱅킹 시스템 구축 사례를 중심으로)

  • Jeong, Cheon-Su;Ahn, Hyun-Chul;Jeong, Seung-Ryul
    • Information Systems Review / v.14 no.1 / pp.1-19 / 2012
  • A large-scale IS development project takes a long time, so its project manager needs to be especially careful about risk management. In particular, appropriate cutover decision making is critical in large-scale IS development projects because the opening of a large-scale IS significantly impacts the organization. Despite its importance, cutover decision making in conventional IS development projects has been handled in quite a simple way: conventional cutover decisions consider only whether the new IS operates, viewed from the system, application, and data implementation perspectives. However, this approach may lead to unsatisfactory performance or system failure in complex large-scale IS development. Against this background, we propose a new framework for cutover decision making on large-scale IS projects. To validate its applicability, we applied the framework to a core banking system development case. The case study shows that our framework is effective for proper cutover decision making.

Effects of Visual Information Blockage on Landing Strategy during Drop Landing (시각 정보의 차단이 드롭랜딩 시 착지 전략에 미치는 영향)

  • Koh, Young-Chul;Cho, Joon-Haeng;Moon, Gon-Sung;Lee, Hae-Dong;Lee, Sung-Cheol
    • Korean Journal of Applied Biomechanics / v.21 no.1 / pp.31-38 / 2011
  • This study aimed to determine the effects of the blockage of visual feedback on joint dynamics of the lower extremity. Fifteen healthy male subjects (age: 24.1 ± 2.3 yr, height: 178.7 ± 5.2 cm, weight: 73.6 ± 6.6 kg) participated in this study. Each subject performed single-legged landing from a 45 cm platform with the eyes open or closed. During the landing performance, three-dimensional kinematics of the lower extremity and ground reaction force (GRF) were recorded using an 8-camera infrared motion analysis system (Vicon MX-F20, Oxford Metric Ltd, Oxford, UK) with a force platform (ORG-6, AMTI, Watertown, MA). The results showed that at 50 ms prior to foot contact and at the time of foot contact, the ankle plantar-flexion angle was smaller (p<.05) while the knee valgus and hip flexion angles were greater with the eyes closed than with the eyes open (p<.05). An increase in anterior GRF was observed during single-legged landing with the eyes closed compared to with the eyes open (p<.05). Time to peak GRF in the medial, vertical, and posterior directions occurred significantly earlier when the eyes were closed (p<.05). Landing with the eyes closed resulted in a higher peak vertical loading rate (p<.05). In addition, shock-absorbing power decreased at the ankle joint (p<.05) but increased at the hip joint when landing with the eyes closed (p<.05). When the eyes were closed, landing could be characterized by a less plantar-flexed ankle joint and a more flexed hip joint, with a faster time to peak GRF. These results imply that subjects are able to adapt the control of landing to different feedback conditions. Therefore, we suggest that training programs be introduced to reduce these injury risk factors.

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.119-138 / 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and even monetary damage occurs more frequently. In this study, we propose a method to determine which sentences and documents posted on SNS are related to financial fraud. First of all, as a conceptual framework, we developed a matrix of conceptual characteristics of cybercriminality on SNS and emergency management. We also suggested an emergency management process consisting of Pre-Cybercriminality (e.g., risk identification) and Post-Cybercriminality steps; among these, we focused on risk identification in this paper. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing these words from SNS such as Twitter. The collected data were given to two researchers, who decided whether each item was related to cybercriminality, particularly financial fraud. We then selected some of these items as keywords when the vocabulary consisted of nominals and symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news articles, and blogs, gathering more than 820,000 articles. The collected articles were refined through preprocessing and turned into learning data. The preprocessing consists of three steps: morphological analysis, stop-word removal, and selection of valid parts of speech. In the morphological analysis step, a complex sentence is decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only two kinds of tokens are kept: nouns and symbols. Since nouns refer to things, they express the intent of a message better than other parts of speech; moreover, the more illegal a text is, the more frequently symbols are used. Each selected item is labeled 'legal' or 'illegal', since turning the selected data into learning data requires classifying whether each item is legitimate or not. The processed data are then converted into a corpus and a Document-Term Matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set (70%) and a test data set (30%). SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in typical cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), Term Frequency, and a Collective Intelligence method, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal visit sales, which is clearly superior to that of Term Frequency, MLE, and the other baselines. Hence, the results suggest that the proposed method is valid and practically usable.
In this paper, we propose a framework for managing crises signaled by abnormalities in unstructured data sources such as SNS. We hope this study contributes to academia by identifying what to consider when applying SVM-like discrimination algorithms to text analysis, and to practitioners in the fields of brand management and opinion mining.
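As an illustration of the classification step described in this abstract, the following is a minimal sketch assuming Python with scikit-learn (neither is named in the paper): a Document-Term Matrix is built from a few placeholder documents, split 70/30, and fed to an RBF-kernel SVM with the reported parameters gamma = 0.5 and cost C = 10. The documents and labels are invented placeholders, not data from the study.

```python
# Minimal sketch of the DTM + SVM discrimination step described above.
# Assumptions: Python with scikit-learn (not named in the paper); the documents
# and labels below are invented placeholders, not data from the study.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder documents already reduced to nouns/symbols, labeled by human coders.
docs = [
    "daechul loan same day approval no collateral call now",   # illegal-style ad (placeholder)
    "government notice low income loan support program",       # legal-style text (placeholder)
    "sachae private loan fast cash anyone no fee",              # illegal-style ad (placeholder)
    "bank announcement new fixed rate mortgage loan product",   # legal-style text (placeholder)
]
labels = ["illegal", "legal", "illegal", "legal"]

# Corpus -> Document-Term Matrix, analogous to the conversion in the paper.
X = CountVectorizer().fit_transform(docs)

# 70% learning / 30% test split, then an SVM with gamma = 0.5 and cost C = 10,
# the parameter values reported in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=42, stratify=labels)
clf = SVC(kernel="rbf", gamma=0.5, C=10)
clf.fit(X_train, y_train)

print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In the study itself, the same split and parameters are applied to the 820,000-plus crawled articles after Korean morphological analysis and stop-word removal, which this sketch does not attempt to reproduce.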

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering model of stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which consists of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, for 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days served as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric showed better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting, although the polynomial kernel shows exceptionally low forecasting accuracy. We suggested an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility itself cannot be traded, but our simulation results are still meaningful since the Korea Exchange introduced a volatility futures contract that traders have been able to trade since November 2014. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the test period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return while SVR-based symmetric S-GARCH shows +526.4%; MLE-based asymmetric E-GARCH shows -72% while SVR-based E-GARCH shows +245.6%; and MLE-based asymmetric GJR-GARCH shows -98.7% while SVR-based GJR-GARCH shows +126.3%.
The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider trading costs such as brokerage commissions and slippage. IVTS trading performance is unrealistic since we use historical volatility values as the trading objects. Accurate forecasting of stock market volatility is essential for real trading as well as for asset pricing models. Further studies on other machine learning-based GARCH models can provide better information for stock market investors.
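As an illustration of the SVR-based volatility forecasting and the IVTS entry rules described in this abstract, below is a minimal sketch assuming Python with NumPy and scikit-learn (the paper does not name its tools). Synthetic returns stand in for the KOSPI 200 series, the regressors are a simple GARCH(1,1)-style pair (lagged squared return and a rolling variance proxy), and the P&L bookkeeping is an illustrative simplification, not the authors' exact procedure.

```python
# Minimal sketch of SVR-based volatility forecasting plus the IVTS entry rules.
# Assumptions: Python with NumPy/scikit-learn; synthetic returns replace the
# KOSPI 200 data; the GARCH(1,1)-style regressors and P&L accounting are
# illustrative simplifications, not the authors' exact procedure.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
r = rng.standard_normal(1487) * 0.01          # placeholder daily returns (1,487 obs, as in the paper)

# Regressors in the spirit of GARCH(1,1): today's squared return and a rolling
# variance proxy, used to predict tomorrow's squared return (a variance proxy).
r2 = r ** 2
var_proxy = np.convolve(r2, np.ones(20) / 20, mode="same")
X = np.column_stack([r2[:-1], var_proxy[:-1]])
y = r2[1:]

n_train = 1187                                 # 1,187 training days, ~300 test days, as reported
X_tr, X_te, y_tr, y_te = X[:n_train], X[n_train:], y[:n_train], y[n_train:]

svr = SVR(kernel="rbf")                        # radial kernel; "linear" and "poly" are the other variants
svr.fit(X_tr, y_tr)
vol_hat = np.sqrt(np.maximum(svr.predict(X_te), 0.0))   # forecasted volatility path
realized_vol = np.sqrt(y_te)                             # historical volatility, the traded object

# IVTS entry rules: buy volatility if tomorrow's forecast rises, sell if it
# falls, otherwise hold the existing position.
position = np.zeros(len(vol_hat) - 1)
for t in range(1, len(vol_hat)):
    if vol_hat[t] > vol_hat[t - 1]:
        position[t - 1] = 1.0                                  # buy volatility today
    elif vol_hat[t] < vol_hat[t - 1]:
        position[t - 1] = -1.0                                 # sell volatility today
    else:
        position[t - 1] = position[t - 2] if t >= 2 else 0.0   # hold existing position

pnl = position * np.diff(realized_vol)         # P&L from day-over-day volatility changes
print("cumulative P&L (arbitrary units):", pnl.sum())
print("profitable-trade percentage:", 100 * (pnl > 0).mean())
```

Swapping kernel="rbf" for "linear" or "poly" mirrors the kernel comparison in the abstract; a full replication would also require the MLE-estimated GARCH baselines and the transaction-cost caveats the authors mention.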