• Title/Summary/Keyword: Bayesian Analysis


A Data-driven Classifier for Motion Detection of Soldiers on the Battlefield using Recurrent Architectures and Hyperparameter Optimization (순환 아키텍쳐 및 하이퍼파라미터 최적화를 이용한 데이터 기반 군사 동작 판별 알고리즘)

  • Joonho Kim;Geonju Chae;Jaemin Park;Kyeong-Won Park
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.107-119
    • /
    • 2023
  • Technology that recognizes a soldier's motion and movement status has recently attracted considerable attention as a combination of wearable technology and artificial intelligence that is expected to upend the paradigm of troop management. The accuracy of state determination must be kept high to deliver the expected functions both in training, where each individual's motions are evaluated and corrected, and in combat, where troop management is enhanced overall. However, when the input is given as a time series or sequence, existing feedforward networks show clear limitations in maximizing classification performance. Since the human behavior data handled for military motion recognition (3-axis accelerations and 3-axis angular velocities) requires analysis of its time-dependent characteristics, this study proposes a high-performance data-driven classifier that uses long short-term memory (LSTM) to capture the order dependence of the acquired data, learning to classify eight representative military operations (Sitting, Standing, Walking, Running, Ascending, Descending, Low Crawl, and High Crawl). Because accuracy depends heavily on the network's learning conditions and variables, manual adjustment is neither cost-effective nor guaranteed to find the optimum during learning. We therefore optimized the hyperparameters with Bayesian optimization to maximize generalization performance. As a result, the final architecture reduced the error rate by 62.56% compared with an existing network with a similar number of learnable parameters, reaching a final accuracy of 98.39% across the military operations.
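As an illustration of the method described in the abstract, the sketch below wires a small LSTM classifier to Bayesian hyperparameter optimization with scikit-optimize. The window length, search space, epoch budget, and synthetic IMU data are assumptions for illustration, not the authors' settings.

```python
# A minimal sketch (not the authors' code): an LSTM classifier over 6-channel
# IMU windows, with Bayesian optimization searching the learning conditions.
import numpy as np
import tensorflow as tf
from skopt import gp_minimize
from skopt.space import Integer, Real

N_CLASSES = 8              # Sitting, Standing, Walking, Running, Ascending,
                           # Descending, Low Crawl, High Crawl
WINDOW, CHANNELS = 128, 6  # hypothetical window length x (3 accel + 3 gyro)

rng = np.random.default_rng(0)  # stand-in data; replace with real IMU windows
x_train = rng.normal(size=(512, WINDOW, CHANNELS)).astype("float32")
y_train = rng.integers(0, N_CLASSES, size=512)
x_val = rng.normal(size=(128, WINDOW, CHANNELS)).astype("float32")
y_val = rng.integers(0, N_CLASSES, size=128)

def objective(params):
    units, dropout, lr = params
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW, CHANNELS)),
        tf.keras.layers.LSTM(units),            # captures order dependence
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    hist = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                     epochs=10, batch_size=64, verbose=0)
    return 1.0 - max(hist.history["val_accuracy"])  # minimize validation error

space = [Integer(32, 256),                       # LSTM units
         Real(0.0, 0.5),                         # dropout rate
         Real(1e-4, 1e-2, prior="log-uniform")]  # learning rate
best = gp_minimize(objective, space, n_calls=30, random_state=0)
print(best.x, best.fun)
```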

Compressed Demographic Transition and Economic Growth in the Latecomer

  • Inyong Shin;Hyunho Kim
    • Analyses & Alternatives
    • /
    • v.7 no.2
    • /
    • pp.35-77
    • /
    • 2023
  • This study aims to untangle the loop between demographic transition (DT) and economic growth by analyzing cross-country data. We undertake a national-level group analysis to verify the compressed transition of demographic variables over time. Assuming that a latecomer advantage (LA) on DT over time exists, we verify that the DT of the latecomer is compressed by providing a formal proof of LA on DT over income. Since a DT is a double-kinked function of income, we check it in multiple aspects: early maturation, a leftward threshold, and a steeper descent, using a contour map and econometric methods. We find that developing countries (the latecomers) have a speedy DT (compressed DT, CDT) as well as speedy income growth, such that the DT of a latecomer starts at a lower level of income, lasts for a shorter period, and finishes at an earlier stage of economic development than that of developed countries (the early movers). To check the balance of DT, we classify countries into four groups: balanced, slow, unilateral, and rapid transition countries. We identify that the main cause of rapid transition is strong government family planning programs. Finally, we check the effect of the latecomer's CDT on economic growth inversely: we simulate the effect of CDT on economic growth and the aging process for the latecomer. A worrying result is that the CDT of the latecomer shows a sharp upturn of the working-age population, followed by a sharp downturn within a short period. Compared to early-mover countries, latecomer countries cannot buy enough time to accommodate the workable population during the period of demographic bonus and to prepare their aging societies for the demographic onus. Thus, we conclude that CDT is not necessarily advantageous to developing countries. These outcomes of the latecomer's CDT can be re-interpreted as follows. Developing countries need power sources to pump up economic development, such as the following production factors: labor, physical and financial capital, and economic systems. As for labor, the early maturation and leftward thresholds of the latecomers' DTs mean that demographic movement occurs at an unusually early stage of economic development; this is like a plane that leaks fuel just before take-off, so that it can no longer fly higher or farther. What is worse, the steeper descent represents the falling speed of the plane, so that it cannot be sustained at higher levels and then plummets to all-time lows.

Forecasting Korean CPI Inflation (우리나라 소비자물가상승률 예측)

  • Kang, Kyu Ho;Kim, Jungsung;Shin, Serim
    • Economic Analysis
    • /
    • v.27 no.4
    • /
    • pp.1-42
    • /
    • 2021
  • The outlook for Korea's consumer price inflation rate has a profound impact not only on the Bank of Korea's operation of its inflation targeting system but also on the overall economy, including the bond market and private consumption and investment. This study presents predictions of consumer price inflation in Korea for the next three years. To this end, model selection is first performed based on the out-of-sample predictive power of autoregressive distributed lag (ADL) models, AR models, small-scale vector autoregressive (VAR) models, and large-scale VAR models. Since there are many potential predictors of inflation, a Bayesian variable selection technique was introduced for 12 macro variables, and a careful tuning process was performed to improve predictive power. For the VAR models, the Minnesota prior distribution was applied to address the curse of dimensionality. Looking at the long- and short-horizon out-of-sample predictions over the last five years, the ADL model was generally superior to the competing models in both point and distribution forecasts. Combining the forecasts of the above models, the inflation rate is expected to stay at the current level of around 2% until the second half of 2022 and to fall to around 1% from the first half of 2023.
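The Minnesota prior mentioned above can be sketched in a few lines: it centers each VAR equation on a random walk and shrinks coefficients more aggressively at longer lags and on other variables' lags. The sketch below is a generic textbook construction, not the paper's code; the tightness hyperparameters lam1/lam2 and the three-variable call are illustrative.

```python
# A hedged sketch of the Minnesota prior used to tame the curse of
# dimensionality in large VARs.
import numpy as np

def minnesota_prior(sigmas, p, lam1=0.2, lam2=0.5):
    """Prior mean and std for VAR coefficient matrices A_1..A_p.

    sigmas : residual std of a univariate AR fit per variable (scale proxy).
    Returns (mean, std), each of shape (p, n, n); entry [l, i, j] covers the
    coefficient on variable j at lag l+1 in equation i.
    """
    n = len(sigmas)
    mean = np.zeros((p, n, n))
    mean[0] = np.eye(n)                      # random-walk prior on own lag 1
    std = np.zeros((p, n, n))
    for l in range(p):
        for i in range(n):
            for j in range(n):
                scale = lam1 / (l + 1)       # tighter at longer lags
                if i != j:                   # extra shrinkage on cross lags
                    scale *= lam2 * sigmas[i] / sigmas[j]
                std[l, i, j] = scale
    return mean, std

mean, std = minnesota_prior(sigmas=np.array([1.0, 0.8, 1.3]), p=4)
```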

Managing the Reverse Extrapolation Model of Radar Threats Based Upon an Incremental Machine Learning Technique (점진적 기계학습 기반의 레이더 위협체 역추정 모델 생성 및 갱신)

  • Kim, Chulpyo;Noh, Sanguk
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.13 no.4
    • /
    • pp.29-39
    • /
    • 2017
  • Various electronic warfare situations drive the need for an integrated electronic warfare simulator that can model and simulate radar threats. In this paper, we analyze the components of a simulation system that reversely models radar threats emitting electromagnetic signals based on electronic intelligence parameters, and we propose a method to incrementally maintain the reverse extrapolation model of RF threats. In the experiments, we evaluate the effectiveness of the incremental model update and also assess methods for integrating the reverse extrapolation models. The individual models of RF threats are constructed using a decision tree, a naive Bayesian classifier, an artificial neural network, and clustering algorithms based on Euclidean distance and cosine similarity, respectively. Experimental results show that the accuracy of the reverse extrapolation models improves as the size of the threat sample increases. In addition, we use voting, weighted voting, and the Dempster-Shafer algorithm to integrate the results of the five models of RF threats. The final reverse extrapolation decision made through the Dempster-Shafer algorithm shows the best accuracy.
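The Dempster-Shafer integration step can be illustrated with a short, generic implementation of Dempster's rule of combination; this is not the paper's code, and the threat hypotheses and mass values below are invented for illustration.

```python
# Dempster's rule of combination: fuse per-classifier beliefs about which
# radar threat emitted a signal. Masses are over singleton hypotheses plus
# the whole frame Theta (ignorance).
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozenset hypotheses."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb           # mass assigned to the empty set
    # note: total conflict (conflict == 1.0) would make the rule undefined
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

THETA = frozenset({"threat_A", "threat_B", "threat_C"})   # hypothetical frame
m_tree  = {frozenset({"threat_A"}): 0.6, frozenset({"threat_B"}): 0.2, THETA: 0.2}
m_bayes = {frozenset({"threat_A"}): 0.5, frozenset({"threat_C"}): 0.3, THETA: 0.2}
fused = dempster_combine(m_tree, m_bayes)
best = max(fused, key=fused.get)          # final reverse-extrapolation decision
print(best, fused[best])
```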

Estimation of Environmental Effect and Genetic Parameter on Reproduction Traits for On-farm Test Records (농장검정돈의 번식형질에 미치는 환경효과 및 유전모수의 추정)

  • Jung, D.J.;Kim, B.W.;Roh, S.H.;Kim, H.S.;Moon, W.K.;Kim, H.Y.;Jang, H.G.;Choi, L.S.;Jeon, J.T.;Lee, J.G.
    • Journal of Animal Science and Technology
    • /
    • v.50 no.1
    • /
    • pp.33-44
    • /
    • 2008
  • The purpose of this study was to estimate the genetic parameters and trends of Landrace and Yorkshire pigs raised on private farms from 1999 to 2005 and tested for reproductive performance by the Korea Animal Improvement Association. Prior to analysis, records without pedigree, or with a total number born more than ±3 standard deviations from the mean, were excluded. The effects of breed and environmental factors were estimated with the least squares method (Harvey, 1979), and breeding values and genetic parameters were estimated on first-litter data only with GIBBSF90 (Misztal, 2001), a program implementing Gibbs sampling based on the Bayesian inference of Gianola and Fernando (1986), Jensen (1994), and others. Gibbs sampling was run for 50,000 rounds for each parameter, and the first 5,000 samples were treated as the burn-in period and excluded from the subsequent analysis. The total number born and the total number of accidents were statistically significant (p<0.01) for the breed, farrowing year, farrowing season, and parity effects, and the number born alive was also statistically significant (p<0.01) for the breed, farrowing year, farrowing season, and parity effects. No particular trend was observed in the genetic and phenotypic improvement of the total number born and the number born alive before 2001, when the piglet registration system started, but since 2001 the total number born and the number born alive have tended to increase while the total number of accidents has tended to decrease. The somewhat higher heritability estimates in our study seem attributable to the use of first-parity records with poor farrowing performance in the analyses, and to the impossibility of obtaining accurate reproductive performance in the absence of record-keeping criteria at the level of individual farms.
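To make the burn-in procedure concrete, here is a toy Gibbs sampler that draws 50,000 samples per parameter and discards the first 5,000, as in the study. It fits a simple normal mean/variance model with noninformative priors, not the full animal model handled by GIBBSF90, and the data are simulated.

```python
# Toy Gibbs sampler: 50,000 draws, first 5,000 discarded as burn-in.
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.5, size=200)   # stand-in for trait records
n, n_iter, burn_in = len(y), 50_000, 5_000

mu, sigma2 = 0.0, 1.0
draws = np.empty((n_iter, 2))
for t in range(n_iter):
    # mu | sigma2, y ~ Normal(ybar, sigma2/n)  (flat prior on mu)
    mu = rng.normal(y.mean(), np.sqrt(sigma2 / n))
    # sigma2 | mu, y ~ scaled inverse chi-square  (Jeffreys-type prior)
    sse = np.sum((y - mu) ** 2)
    sigma2 = sse / rng.chisquare(n)
    draws[t] = mu, sigma2

posterior = draws[burn_in:]                    # post-burn-in samples only
print(posterior.mean(axis=0))                  # posterior means of (mu, sigma2)
```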

The Effect of Rain on Traffic Flows in Urban Freeway Basic Segments (기상조건에 따른 도시고속도로 교통류변화 분석)

  • 최정순;손봉수;최재성
    • Journal of Korean Society of Transportation
    • /
    • v.17 no.1
    • /
    • pp.29-39
    • /
    • 1999
  • An earlier study of the effect of rain found that the capacity of freeway systems was reduced, but it did not address the effects of rain on the nature of traffic flows. Indeed, the substantial variation due to the intensity of adverse weather is entirely rational, so its effects must be considered in freeway facility design. However, all of the data in the Highway Capacity Manual (HCM) come from ideal conditions. The primary objective of this study is to investigate the effect of rain on urban freeway traffic flows in Seoul. To do so, we investigated the relations between three key traffic variables (flow rate, speed, and occupancy), their threshold values between the congested and uncongested traffic flow regimes, and the speed distribution. The traffic data from the Olympic Expressway in Seoul were obtained from a video image detection system (Autoscope) at 30-second and 1-minute intervals. The slope of the regression line relating flow to occupancy in the uncongested regime decreases when it is raining. In essence, this result indicates that the average service flow rate (which may be interpreted as the capacity of the freeway) is reduced as weather conditions deteriorate. The reduction is in the range of 10 to 20%, which agrees with the range proposed by the 1994 US HCM. It is noteworthy that the service flow rates of the inner lanes are relatively higher than those of the other lanes. The average speed is also reduced on rainy days, but the flow-speed relationship and the threshold values of speed and occupancy (the critical speed and critical occupancy) are not very sensitive to the weather conditions.
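The slope comparison described above can be reproduced schematically: fit the uncongested branch of the flow-occupancy relation separately under dry and rainy conditions and compare the slopes. The sketch below uses synthetic data and an assumed critical occupancy of 18%, not the Olympic Expressway records.

```python
# Compare uncongested flow-occupancy slopes for dry vs. rainy periods; the
# slope drop approximates the service-flow-rate (capacity) reduction.
import numpy as np

def uncongested_slope(occupancy, flow, critical_occ=18.0):
    """Least-squares slope of flow vs. occupancy below the critical occupancy."""
    mask = occupancy < critical_occ             # keep the uncongested regime
    slope, _ = np.polyfit(occupancy[mask], flow[mask], 1)
    return slope

rng = np.random.default_rng(0)
occ = rng.uniform(2, 30, size=500)
flow_dry = 95 * occ + rng.normal(0, 60, 500)    # synthetic dry-weather data
flow_wet = 80 * occ + rng.normal(0, 60, 500)    # synthetic rainy-day data

s_dry = uncongested_slope(occ, flow_dry)
s_wet = uncongested_slope(occ, flow_wet)
print(f"capacity reduction ~ {100 * (1 - s_wet / s_dry):.1f}%")  # ~10-20% range
```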


Building battery deterioration prediction model using real field data (머신러닝 기법을 이용한 납축전지 열화 예측 모델 개발)

  • Choi, Keunho;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.243-264
    • /
    • 2018
  • Although the worldwide battery market has recently been spurring the development of lithium secondary batteries, lead-acid batteries (rechargeable batteries), which perform well and can be reused, are consumed across a wide range of industries. However, lead-acid batteries have a serious problem: deterioration progresses quickly once even one of the cells packed into a battery begins to degrade. To overcome this problem, previous research has attempted to identify the mechanism of battery deterioration in many ways. However, most previous studies have analyzed the mechanism using data obtained in a laboratory rather than data obtained in the real world. Using real data can increase the feasibility and applicability of a study's findings. Therefore, this study aims to develop a model that predicts battery deterioration using data obtained in the real world. To this end, we collected data on changes in battery state by attaching sensors that monitor battery condition in real time to dozens of golf carts operated on a real golf course, obtaining 16,883 samples in total. We then developed a model that predicts a precursor phenomenon of battery deterioration by analyzing the sensor data with machine learning techniques. As initial independent variables, we used 1) inbound time of a cart, 2) outbound time of a cart, 3) duration (from outbound time to charge time), 4) charge amount, 5) used amount, 6) charge efficiency, 7) lowest temperature of battery cells 1 to 6, 8) lowest voltage of battery cells 1 to 6, 9) highest voltage of battery cells 1 to 6, 10) voltage of battery cells 1 to 6 at the beginning of operation, 11) voltage of battery cells 1 to 6 at the end of charge, 12) used amount of battery cells 1 to 6 during operation, 13) used amount of the battery during operation (Max-Min), 14) duration of battery use, and 15) highest current during operation. Since the per-cell variables (lowest temperature, lowest voltage, highest voltage, voltage at the beginning of operation, voltage at the end of charge, and used amount during operation of cells 1 to 6) take similar values across cells, we conducted principal component analysis with varimax orthogonal rotation to mitigate multicollinearity. Based on the results, we created new variables by averaging the independent variables clustered together and used them as the final independent variables in place of the originals, thereby reducing the dimensionality. We used decision trees, logistic regression, and Bayesian networks as algorithms for building the prediction models, and we also built models using bagging and boosting of each of them, as well as random forests. Experimental results show that the model using bagging of decision trees yields the best accuracy, 89.3923%. This study is limited in that additional variables that affect battery deterioration, such as weather (temperature, humidity) and driving habits, were not considered; we would like to consider them in future research. Nevertheless, the battery deterioration prediction model proposed in the present study is expected to enable effective and efficient management of batteries used in the field and to dramatically reduce the cost incurred by failing to detect battery deterioration.
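A hedged scikit-learn sketch of the pipeline described above: dimensionality reduction over the collinear per-cell variables followed by bagged decision trees, the combination the paper reports as best (89.3923% accuracy). Plain PCA stands in for the varimax-rotated analysis, and the feature matrix is a random placeholder for the golf-cart sensor logs.

```python
# Sketch: reduce correlated per-cell sensor features, then fit bagged trees.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(16883, 40))    # 16,883 samples as in the paper; 40 raw features (assumed)
y = rng.integers(0, 2, size=16883)  # 1 = precursor of battery deterioration

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=15),           # collapse collinear per-cell variables
    # BaggingClassifier's default base estimator is a decision tree,
    # i.e. bagging of decision trees as in the paper's best model
    BaggingClassifier(n_estimators=50, random_state=0),
)
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())
```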

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in fields such as classification, summarization, and generation. Among text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. Multi-label classification in particular requires a different training method from binary and multi-class classification because each instance carries multiple labels. In addition, since the number of labels to predict grows with the number of labels and classes, performance improvement becomes difficult as prediction difficulty increases. To overcome these limitations, label embedding research is being actively conducted: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformations, they cannot capture the non-linear relationships between labels, and so they cannot create a latent label space that sufficiently preserves the information of the original labels. Recently, attempts to improve performance by applying deep learning to label embedding have increased; label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large loss of information when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space, which is related to the vanishing-gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: by adding a layer's input to its output, they keep gradients from vanishing during backpropagation and enable efficient learning even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using them in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using this, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract and to evaluate the multi-label classification obtained by restoring the predicted keyword vector to the original label space. The accuracy, precision, recall, and F1 score used as performance indicators were far superior for multi-label classification based on the proposed methodology than for traditional multi-label classification methods. This shows that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was examined by comparing its performance across domain characteristics and across the number of dimensions of the latent label space.
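As a sketch of the proposed architecture, the following Keras model adds a skip connection around a dense layer in both the encoder and the decoder. The label dimension (1,000), latent dimension (32), and layer widths are illustrative assumptions, not the paper's keyword space.

```python
# Label-embedding autoencoder with skip connections on encoder and decoder.
import tensorflow as tf
from tensorflow.keras import layers

N_LABELS, LATENT = 1000, 32   # illustrative label and latent dimensions

def dense_skip(x, units):
    """Dense layer whose input is added back to its output (skip connection)."""
    h = layers.Dense(units, activation="relu")(x)
    if x.shape[-1] != units:                   # project input to match widths
        x = layers.Dense(units, use_bias=False)(x)
    return layers.Add()([h, x])

labels_in = tf.keras.Input(shape=(N_LABELS,))
h = dense_skip(labels_in, 256)                 # encoder with skip connection
latent = layers.Dense(LATENT, activation="relu", name="latent_labels")(h)
h = dense_skip(latent, 256)                    # decoder with skip connection
labels_out = layers.Dense(N_LABELS, activation="sigmoid")(h)  # multi-label recon

autoencoder = tf.keras.Model(labels_in, labels_out)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# After training on the label matrix, the "latent_labels" activations serve as
# the compressed targets a text classifier learns to predict from abstracts.
```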

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many robo-advisor products. A robo-advisor produces an optimal asset-allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since robo-advisor algorithms suggest asset allocations to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. It is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return, using optimization techniques. Despite its theoretical grounding, both academics and practitioners find that the standard mean-variance optimal portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, yielding new estimates of risk and expected returns. These new estimates can produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor has no views on the asset classes, the Black-Litterman model produces the market portfolio. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may yield a very poor portfolio for Black-Litterman users. This paper suggests an objective investor-views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastic %K, and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as inputs to the intelligent views model. Stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and the probability results are used in the Q matrix. The implied equilibrium return vector is combined with the intelligent views matrix to produce the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and a risk parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is 2008 to 2015 and the testing period is 2016 to 2018. Our suggested intelligent views model, combined with the implied equilibrium returns, produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio outperformed the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio. The total return of the Black-Litterman portfolio over the three-year period is 6.4%, the highest value; the maximum drawdown is -20.8%, the smallest; and the Sharpe ratio, which measures return per unit of risk, is the highest at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective views model for practitioners applying robo-advisor asset allocation algorithms in real trading.
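The Black-Litterman update at the core of this algorithm can be written compactly. The sketch below is a generic implementation of the standard formula, not the paper's code; the risk-aversion coefficient delta, the scalar tau, and the toy three-asset numbers are assumptions, and in the paper the view matrices P and Q would be filled from the SVM outputs rather than hand-set as here.

```python
# Black-Litterman: blend implied equilibrium returns Pi with views (P, Q).
import numpy as np

def black_litterman(Sigma, w_mkt, P, Q, Omega, delta=2.5, tau=0.05):
    Pi = delta * Sigma @ w_mkt                     # reverse optimization
    A = np.linalg.inv(tau * Sigma)                 # precision of the prior
    B = P.T @ np.linalg.inv(Omega) @ P             # precision added by views
    mu_bl = np.linalg.solve(A + B, A @ Pi + P.T @ np.linalg.inv(Omega) @ Q)
    w_bl = np.linalg.solve(delta * Sigma, mu_bl)   # unconstrained MV weights
    return mu_bl, w_bl

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # toy covariance of three sectors
w_mkt = np.array([0.5, 0.3, 0.2])        # market-cap weights
P = np.array([[1.0, -1.0, 0.0]])         # view: asset 1 outperforms asset 2
Q = np.array([0.02])                     # by 2% (e.g., an "up" SVM signal)
Omega = np.array([[0.001]])              # view confidence (from SVM probability)
mu, w = black_litterman(Sigma, w_mkt, P, Q, Omega)
print(mu, w)
```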

The Macroeconomic Impacts of Korean Elections and Their Future Consequences (선거(選擧)의 거시경제적(巨視經濟的) 충격(衝擊)과 파급효과(波及效果))

  • Shim, Sang-dal;Lee, Hang-yong
    • KDI Journal of Economic Policy
    • /
    • v.14 no.1
    • /
    • pp.147-165
    • /
    • 1992
  • This paper analyzes the macroeconomic effects of elections on the Korean economy and their future ramifications. It measures the shocks to the Korean economy caused by elections by averaging the sample forecast errors from the four major elections held in the 1980s. The seven-variable Bayesian vector autoregression model, which includes the monetary base, industrial production, consumption, consumer prices, exports, and investment, is based on quarterly time series starting in 1970 and is updated every quarter before forecasts are made for the next quarter. Because of this updating of coefficients, which partly reflects the rapid structural changes of the Korean economy, this study can capture the shock effect of elections, which is not possible with election dummies in a fixed-coefficient model. In past elections, especially those held in the 1980s, M2 did not show any particular movement, but currency and base money increased during the quarter in which the election was held, and the increase was partly recalled in the next quarter. Interest rates, as measured by corporate bond yields, fell during the election quarter and then rose in the following quarter, somewhat contrary to the general concern that interest rates rise during election periods. Manufacturing employment fell in the election quarter because workers became campaigners; this decline in employment, combined with the voting holiday, produced a sizeable decline in industrial production during election quarters, but production caught up in the next quarter and sometimes more than offset the disruption. The major shocks to prices occur in the preceding quarter, reflecting the expectational effect and the relaxation of government price controls before the election. When we simulate the impulse responses of the VAR model, imposing the shocks measured in past elections on each election to be held in 1992 and assuming that the 1992 elections will affect the economy in the same manner as the 1980s elections, 1992 is expected to see a sizeable election-driven increase in the monetary base, and price pressures will be amplified substantially. On the other hand, the election-driven increase in consumption is expected to be relatively small, and production will not decrease. Despite the increased liquidity, a large portion of the liquidity in circulation being used as election funds will distort the flow of funds and aggravate the fund shortage, causing investment in plant and equipment and construction activity to stagnate. These effects would be greatly amplified if elections for heads of local governments were also held this year. If mayoral and gubernatorial elections are held after the National Assembly elections, their effects on prices and investment will be roughly double what they would have been had only congressional and presidential elections been held. Even if mayoral and gubernatorial elections are held at the same time as congressional elections, the elections of local government heads are shown to add substantial effects on the economy for the year. The above results assume that this year's elections will shock the economy in the same manner as past elections. However, elections in consecutive quarters give the economy no chance to pause and recuperate between them. This year's elections may therefore have greater effects on prices and production than the model's simulations show, because campaigners' return to industry may be delayed, so money may not be recalled rapidly after the elections. In view of the surge in the monetary base and the price escalation in the periods before and after elections, economic management in 1992 should place its first priority on controlling the monetary aggregates, in particular stabilizing the growth of the monetary base.
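The shock-measurement idea, refitting the model each quarter and averaging the one-step-ahead forecast errors over election quarters, can be sketched as follows. This uses a statsmodels VAR as a stand-in for the paper's seven-variable Bayesian VAR, with random placeholder data and hypothetical election-quarter positions.

```python
# Measure the "election shock" as the average one-step-ahead forecast error
# of a quarterly VAR refit before each election quarter.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(120, 3)),
                    columns=["base_money", "production", "cpi"])  # placeholders
election_quarters = [60, 76, 92, 108]            # hypothetical sample positions

errors = []
for t in election_quarters:
    fit = VAR(data.iloc[:t]).fit(maxlags=4)      # coefficients updated each time
    forecast = fit.forecast(data.values[t - fit.k_ar:t], steps=1)[0]
    errors.append(data.values[t] - forecast)     # surprise in the election quarter

election_shock = np.mean(errors, axis=0)         # average over past elections
print(election_shock)   # fed into impulse-response simulations of future elections
```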
