• Title/Summary/Keyword: 시계열 (time series)


The Prediction of Currency Crises through Artificial Neural Network (인공신경망을 이용한 경제 위기 예측)

  • Lee, Hyoung Yong;Park, Jung Min
    • Journal of Intelligence and Information Systems, v.22 no.4, pp.19-43, 2016
  • This study examines the causes of the Asian exchange rate crisis and compares it with the European Monetary System crisis. In 1997, emerging countries in Asia experienced financial crises; the currencies of the European Monetary System had undergone the same experience in 1992, followed by Mexico in 1994. The objective of this paper is to generate useful insights from these crises. The research compares South Korea, the United Kingdom, and Mexico, and then compares three different prediction models. Previous studies of economic crises focused largely on the manual construction of causal models using linear techniques, but the weakness of such models stems from the prevalence of nonlinear factors in reality. This paper uses a structural equation model to analyze the causes, followed by a neural network model to circumvent the linear model's weaknesses. The models are examined in the context of predicting exchange rates. The data were quarterly, with the Consumer Price Index, Gross Domestic Product, interest rate, stock index, current account, and foreign reserves as independent variables for the prediction; the time periods of each country's data differ, however. LISREL is an emerging method and as such requires a fresh approach to financial crisis prediction model design, along with the flexibility to accommodate unexpected change. The results indicate that the neural network model has the greater prediction performance in Mexico and the United Kingdom; in Korea, however, multiple regression performs better, and in Mexico multiple regression is almost indistinguishable from LISREL. Although LISREL does not show significant performance here, a refined model is expected to show better results; future work should add psychological factors and other unobservable constructs to the structural model. The low hit ratio arises because the alternative model in this paper uses only financial market data and so cannot consider other important factors. Korea's hit ratio is lower than that of the United Kingdom, suggesting that other constructs affect its financial market; the same holds for Mexico. The United Kingdom's financial market is more strongly influenced by, and better explained by, financial factors than those of Korea and Mexico.
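
As a rough illustration of the comparison just described, the following Python sketch pits a small feed-forward neural network against multiple regression on the six quarterly indicators named in the abstract. Everything here is a synthetic assumption for illustration; it does not reproduce the paper's LISREL step, its datasets, or its exact model configurations.

```python
# Minimal sketch: a small neural network versus multiple regression for
# predicting an exchange rate from quarterly macro indicators. The data are
# synthetic stand-ins, not the paper's data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 80  # hypothetical number of quarterly observations
# Independent variables named in the abstract: CPI, GDP, interest rate,
# stock index, current account, foreign reserves.
X = rng.normal(size=(n, 6))
# Synthetic, mildly nonlinear target standing in for the exchange rate.
y = (X @ np.array([0.5, -0.3, 0.8, -0.2, 0.4, -0.6])
     + 0.5 * np.tanh(X[:, 2]) + rng.normal(scale=0.1, size=n))

X_train, X_test, y_train, y_test = X[:60], X[60:], y[:60], y[60:]

models = {
    "multiple regression": make_pipeline(StandardScaler(), LinearRegression()),
    "neural network": make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
    ),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mse = np.mean((model.predict(X_test) - y_test) ** 2)
    print(f"{name}: test MSE = {mse:.4f}")
```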

Grand Circulation Process of Beach Cusp and its Seasonal Variation at the Mang-Bang Beach from the Perspective of Trapped Mode Edge Waves as the Driving Mechanism of Beach Cusp Formation (맹방해안에서 관측되는 Beach Cusp의 일 년에 걸친 대순환 과정과 계절별 특성 - 여러 생성기작 중 포획모드 Edge Waves를 중심으로)

  • Cho, Yong Jun
    • Journal of Korean Society of Coastal and Ocean Engineers, v.31 no.5, pp.265-277, 2019
  • Using measured wave and shoreline data, we review the grand circulation process and seasonal variation of beach cusps at the Mang-Bang beach from the perspective of trapped-mode edge waves, a known driving mechanism of beach cusp formation. To track the temporal and spatial variation of the beach cusps, we quantify them in terms of the wavelength and amplitude detected by a threshold-crossing method, and we also utilize spectral analysis and the associated spectral mean sand wave number. From the repeated periods of convergence and subsequent splitting of sand waves detected in the yearly time series of the spectral mean sand wave number, the grand circulation process of beach cusps at Mang-Bang beach is shown to have occurred twice from April 26, 2017 to April 20, 2018. Over this period the beach area increased by 14,142 m², and the shoreline advanced by 18 m at the northern and southern parts of the Mang-Bang beach but by only 2.4 m at its central part. It is also worth noting that the beach area rapidly increased by 30,345 m² from November 26 to December 22, 2017, which can be attributed to the nature of the incoming waves: during this period mild swells of long period prevailed with an angle of attack close to zero, implying that the main sediment transport mode was cross-shore. Considering that the self-healing capacity of natural beaches, once temporarily eroded, is realized via cross-shore sediment transport, it can readily be deduced that sediment carried shoreward by boundary layer streaming under the mild swells normally incident on the Mang-Bang beach made the beach area increase rapidly from November 26 to December 22, 2017.
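
The spectral mean sand wave number used above can be illustrated compactly: demean the alongshore shoreline position, take its spectrum, and form the energy-weighted mean wavenumber. The sketch below is a minimal example on a synthetic cuspate shoreline; the sampling interval, noise level, and weighting details are assumptions, not the author's processing chain.

```python
# Sketch of a spectral mean sand wave number: energy-weighted mean
# wavenumber of the demeaned alongshore shoreline position.
import numpy as np

dx = 5.0                    # alongshore sampling interval [m] (assumed)
x = np.arange(0, 2000, dx)  # alongshore coordinate [m]
# Synthetic shoreline: a cuspate pattern of ~150 m wavelength plus noise.
eta = (3.0 * np.cos(2 * np.pi * x / 150.0)
       + np.random.default_rng(1).normal(0, 0.5, x.size))

eta = eta - eta.mean()                           # remove the mean shoreline
spec = np.abs(np.fft.rfft(eta)) ** 2             # one-sided energy spectrum
k = 2 * np.pi * np.fft.rfftfreq(eta.size, d=dx)  # wavenumbers [rad/m]

k_mean = (k * spec).sum() / spec.sum()           # spectral mean sand wave number
print(f"spectral mean wavenumber: {k_mean:.4f} rad/m "
      f"-> wavelength ~ {2 * np.pi / k_mean:.1f} m")
```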

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems, v.25 no.3, pp.239-251, 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction, but they have not produced superior performance. In recent years, machine learning techniques such as artificial neural networks, SVM, and genetic algorithms have been widely used for stock market prediction. In particular, a case-based reasoning method known as k-nearest neighbor (k-NN) is widely used for stock price prediction. Case-based reasoning retrieves several similar cases from previous cases when a new problem occurs, and combines the class labels of the similar cases to create a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space, always selecting the same number of neighbors rather than the best similar neighbors for the target case; it may thus have to take more cases into account even when fewer applicable cases exist. Second, it may select neighbors that are far away from the target case. Case-based reasoning therefore does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability through k-NN, and compares the predictability of k-NN with that of the random walk model according to the size of the learning data and the number of neighbors. Samsung Electronics stock prices were predicted using two learning datasets. For the prediction of the next day's closing price, we used four variables: opening price, daily high, daily low, and daily close. In the first experiment, data from January 1, 2000 to December 31, 2017 were used for the learning process; in the second experiment, data from January 1, 2015 to December 31, 2017 were used. The test data for both experiments run from January 1, 2018 to August 31, 2018. We compared the performance of k-NN with the random walk model on the two learning datasets. With the larger learning dataset of the first experiment, the mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.2928 for k-NN; with the smaller dataset of the second experiment, the MAPE was 1.3497 for the random walk model and 1.3570 for k-NN. These results show that prediction power is higher when more learning data are used than when less are used; k-NN generally produces better predictive power than the random walk model for larger learning datasets, but not when the learning dataset is relatively small. Future studies need to consider macroeconomic variables related to stock price forecasting in addition to the opening, low, high, and closing prices. To produce better results, it is also recommended that k-NN find its nearest neighbors using a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
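
A hedged sketch of the experimental setup described above: predict the next day's closing price from the four variables (open, high, low, close) with k-NN, and compare MAPE against a random walk that uses today's close as tomorrow's forecast. The price series below is synthetic; the paper used actual Samsung Electronics daily data and specific date-based splits.

```python
# k-NN next-day close prediction versus a random walk baseline, scored by MAPE.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
close = 40000 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))  # synthetic closes
open_ = np.roll(close, 1) * (1 + rng.normal(0, 0.003, 1000))
high = np.maximum(open_, close) * (1 + np.abs(rng.normal(0, 0.003, 1000)))
low = np.minimum(open_, close) * (1 - np.abs(rng.normal(0, 0.003, 1000)))

X = np.column_stack([open_, high, low, close])[1:-1]  # today's OHLC
y = close[2:]                                         # tomorrow's close

split = int(len(X) * 0.9)                             # illustrative split
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)

def mape(actual, pred):
    return np.mean(np.abs((actual - pred) / actual)) * 100

print("random walk MAPE:", mape(y_te, X_te[:, 3]))  # today's close as forecast
print("k-NN MAPE:       ", mape(y_te, knn.predict(X_te)))
```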

Studies on Changes in the Hydrography and Circulation of the Deep East Sea (Japan Sea) in a Changing Climate: Status and Prospectus (기후변화에 따른 동해 심층 해수의 물리적 특성 및 순환 변화 연구 : 현황과 전망)

  • HOJUN LEE;SUNGHYUN NAM
    • The Sea: Journal of the Korean Society of Oceanography, v.28 no.1, pp.1-18, 2023
  • The East Sea, one of the regions where the most rapid ocean warming is occurring, has important implications for the response of the ocean to future climate change, because it not only reacts sensitively to climate change but also has a much shorter turnover time (hundreds of years) than the global ocean (thousands of years). However, the processes underlying changes in seawater characteristics in the sea's deep and abyssal layers and in its meridional overturning circulation have only recently been examined, after international cooperative observation programs covering the entire sea provided in-situ data of the necessary resolution and accuracy, together with recent improvements in numerical modeling. In this review, previous studies on the physical characteristics of seawater in the deeper parts of the East Sea and on the meridional overturning circulation are summarized to identify remaining issues. The seawater below a depth of several hundred meters in the East Sea has been identified as the Japan Sea Proper Water (East Sea Proper Water) owing to its homogeneous physical properties: a water temperature below 1℃ and practical salinity between 34.0 and 34.1. However, vertically high-resolution salinity and dissolved oxygen observations since the 1990s have enabled this water to be separated into at least three different water masses (central water, CW; deep water, DW; bottom water, BW). Recent studies have shown that the physical characteristics of, and boundaries between, the three water masses are not constant over time but have varied significantly over the last few decades in association with time-varying water formation processes, such as convection (deep slope convection and open-ocean deep convection) linked to the re-circulation of the Tsushima Warm Current, ocean-atmosphere heat and freshwater exchanges, and sea-ice formation in the northern part of the East Sea. The CW, DW, and BW were found to be transported horizontally from the Japan Basin to the Ulleung Basin, from the Ulleung Basin to the Yamato Basin, and from the Yamato Basin to the Japan Basin, respectively, rotating counterclockwise with shallower depths on the right of the path (consistent with the bottom topographic control of fluid on a rotating Earth). This horizontal deep circulation is part of the sea's meridional overturning circulation, whose path and intensity have undergone changes. Yet the linkages between the upper and deeper circulation, and between the horizontal and meridional overturning circulation, are not well understood. Through this review, the issues remaining to be addressed were identified: the connection between the changing properties of CW, DW, and BW and their horizontal and overturning circulations; the linkage of the deep and abyssal circulations to the upper circulation, including upper water transport from and into the Western Pacific Ocean; and the processes underlying the temporal variability in the path and intensity of CW, DW, and BW.
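
The water-mass criterion quoted in the review translates directly into a simple filter; the sketch below flags invented CTD samples as Japan Sea (East Sea) Proper Water using the stated temperature and salinity bounds. Finer separation into CW, DW, and BW would require the high-resolution salinity and dissolved-oxygen structure the review discusses, whose thresholds are not given in the abstract, so it is not attempted here.

```python
# Flag samples as Japan Sea (East Sea) Proper Water per the criterion above:
# temperature below 1 °C and practical salinity between 34.0 and 34.1.
import numpy as np

temp = np.array([10.2, 0.8, 0.4, 0.1])       # in-situ temperature [°C] (invented)
sal = np.array([34.3, 34.05, 34.07, 34.06])  # practical salinity (invented)

is_proper_water = (temp < 1.0) & (sal >= 34.0) & (sal <= 34.1)
print(is_proper_water)  # [False  True  True  True]
```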

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.107-122, 2017
  • Volatility in stock market returns is a measure of investment risk and plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering model of stock market volatility, Autoregressive Conditional Heteroscedasticity (ARCH), which explains the time-varying character of stock return volatility; Bollerslev (1986) generalized it to the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density, but since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, for 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days served as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric favors the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution with fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel shows exceptionally low forecasting accuracy. We then propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. Its entry rules are as follows: if tomorrow's forecasted volatility will increase, buy volatility today; if it will decrease, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values, which is somewhat unrealistic because historical volatility cannot itself be traded, but our simulation results remain meaningful since the Korea Exchange introduced a volatility futures contract that traders have been able to trade since November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based systems over the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH returns +150.2% versus +526.4% for SVR-based S-GARCH; MLE-based asymmetric E-GARCH returns -72% versus +245.6% for SVR-based E-GARCH; and MLE-based asymmetric GJR-GARCH returns -98.7% versus +126.3% for SVR-based GJR-GARCH. The linear kernel produces higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also trades more frequently. This study has some limitations. Our models rely solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. The IVTS trading performance is unrealistic insofar as we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine learning-based GARCH models can give better information to stock market investors.
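
The SVR-based estimation and the IVTS entry rule lend themselves to a compact sketch. The example below is a hedged approximation: it mirrors the GARCH(1,1) recursion h_t = ω + α·r²_{t-1} + β·h_{t-1} by regressing today's squared return on yesterday's squared return and a lagged rolling-variance proxy with SVR. The synthetic returns and the variance proxy are assumptions, not the authors' estimation procedure.

```python
# SVR-based volatility forecasting in the spirit of the SVR-GARCH comparison,
# plus the IVTS-style entry rule from the abstract.
import numpy as np
import pandas as pd
from sklearn.svm import SVR

rng = np.random.default_rng(0)
r = pd.Series(rng.normal(scale=0.01, size=1487))  # stand-in for KOSPI 200 returns

r2 = r ** 2
h_proxy = r2.rolling(20).mean()  # rolling-variance proxy for h_{t-1} (assumed)
data = pd.DataFrame(
    {"r2_lag": r2.shift(1), "h_lag": h_proxy.shift(1), "r2": r2}
).dropna()

train, test = data.iloc[:1187], data.iloc[1187:]
svr = SVR(kernel="linear", C=1.0, epsilon=1e-5)  # linear kernel, as in the paper
svr.fit(train[["r2_lag", "h_lag"]], train["r2"])
h_hat = svr.predict(test[["r2_lag", "h_lag"]])   # forecasted conditional variance

# IVTS entry rule: buy volatility if the forecast rises versus the current
# estimate, sell if it falls, otherwise hold.
signal = np.sign(h_hat - test["h_lag"].values)   # +1 buy, -1 sell, 0 hold
print(signal[:10])
```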

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems, v.19 no.3, pp.1-23, 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that urgently need to be solved in modern society, the existing approach has researchers collect opinions from professional experts and scholars through online or offline surveys. However, this method is not always effective. Because of the expense involved, a large number of survey replies are seldom gathered, and in some cases it is hard to find experts on specific social issues, so the sample is often small and may be biased. Furthermore, several experts may draw totally different conclusions about a social issue, because each has a subjective point of view and a different background, making it considerably hard to figure out what the current social issues are and which of them are really important. To surmount these shortcomings, in this paper we develop a prototype system that semi-automatically detects social issue keywords, representing social issues and problems, from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 to July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of the Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models, whose goal is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which consists of relevant terms and their probability values. Given a set of text documents (e.g., news articles), LDA yields a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, given a topic such as Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}, a human annotator might label Topic1 "Unemployment Problem". Looking only at such social keywords, however, we have no idea of the detailed events occurring in our society. To tackle this, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs, extract a set of topics using LDA, and then assign each paragraph to the topic it best matches, so that each topic ends up with several best-matched paragraphs. For instance, given the topic "Unemployment Problem" and its best-matched paragraph (e.g., "Up to 300 workers lost their jobs in XXX company at Seoul"), we can grasp detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Through our matching process and keyword visualization, researchers will be able to detect social issues easily and quickly. With this prototype system we have detected various social issues appearing in our society, and our experimental results show the effectiveness of the proposed methods. Our proof-of-concept system is available at http://dslab.snu.ac.kr/demo.html.
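
The paragraph-to-topic matching step can be sketched as follows, on a toy corpus. The score here simply sums the log term probabilities of a paragraph's words under each topic, which approximates, but may not exactly match, the authors' generative matching model; the documents and paragraph are invented for illustration.

```python
# Fit LDA, then assign a paragraph to the topic under which its words are
# most probable, mirroring the matching step described above.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "unemployment layoff business jobs factory",
    "welfare pension elderly support policy",
    "unemployment jobs workers layoff economy",
]
vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = {w: i for i, w in enumerate(vec.get_feature_names_out())}
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)  # P(word|topic)

def score(paragraph, topic):
    """Log-probability-style score of a paragraph under one topic."""
    words = paragraph.lower().split()
    return sum(np.log(topic_word[topic, vocab[w]]) for w in words if w in vocab)

paragraph = "Up to 300 workers lost their jobs in XXX company"
best = max(range(2), key=lambda t: score(paragraph, t))
print("best-matching topic:", best)
```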

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.26 no.4, pp.127-148, 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, which makes the cause difficult to determine. Previous studies of failure prediction in data centers treated each server as a single, isolated state, without assuming that the devices interact. In this study, therefore, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and we focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are being developed. The cause of failures occurring within a server, on the other hand, is difficult to determine, and adequate prevention has not yet been achieved, in particular because server failures do not occur in isolation: one server's failure can cause failures on other servers or be triggered by them. In other words, while existing studies analyzed failures on the assumption that a single server does not affect the others, this study assumes that failures propagate between servers. To define the complex failure situation in the data center, we used the failure history data of each piece of equipment in the data center. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and failures on different devices occurring within 5 minutes of one another are defined as simultaneous. After constructing sequences of devices that failed at the same time, we selected the 5 devices that most frequently failed simultaneously within the constructed sequences, and confirmed through visualization the cases in which the selected devices failed at the same time. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used to reflect the fact that the contribution of each server to a complex failure differs; this algorithm increases prediction accuracy by giving greater weight to servers with greater impact on the failure. The study began by defining the types of failure and selecting the analysis targets. In the first experiment, the same collected data were treated as a single-server state and as a multiple-server state, and the results were compared. The second experiment improved the prediction accuracy for complex server failures by optimizing the threshold of each server. In the first experiment, the single-server model predicted no failure for three of the five servers even though failures actually occurred, whereas the multiple-server model predicted failures on all five servers, supporting the hypothesis that servers affect one another. The results confirm that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, under the assumption that the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
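
The model structure described, per-server LSTM encoders combined through attention weights reflecting each server's impact, might be sketched as follows in PyTorch. The dimensions, server count, and data are all assumptions, since the abstract does not give the exact Hierarchical Attention Network configuration.

```python
# One LSTM encodes each server's resource time series; an attention layer
# weights the per-server encodings before a failure classifier.
import torch
import torch.nn as nn

class ServerHAN(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)  # scores each server's encoding
        self.head = nn.Linear(hidden, 1)  # failure / no-failure logit

    def forward(self, x):
        # x: (batch, n_servers, time, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.lstm(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)             # per-server encodings
        w = torch.softmax(self.attn(h), dim=1)  # attention weight per server
        ctx = (w * h).sum(dim=1)                # impact-weighted summary
        return self.head(ctx).squeeze(-1), w.squeeze(-1)

model = ServerHAN()
x = torch.randn(4, 5, 60, 8)  # 4 samples, 5 servers, 60 timesteps, 8 metrics
logit, weights = model(x)
print(logit.shape, weights.shape)  # torch.Size([4]) torch.Size([4, 5])
```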