• Title/Abstract/Keyword: combinatorial optimization problem

Search results: 186

Evaluation of extreme rainfall estimation obtained from NSRP model based on the objective function with statistical third moment (통계적 3차 모멘트 기반의 목적함수를 이용한 NSRP 모형의 극치강우 재현능력 평가)

  • Cho, Hemie;Kim, Yong-Tak;Yu, Jae-Ung;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association, v.55 no.7, pp.545-556, 2022
  • It is recommended to use long-term hydrometeorological data covering more than the service life of hydraulic structures for water resource planning. To expand rainfall data, stochastic simulation models such as the Modified Bartlett-Lewis Rectangular Pulse (BLRP) and Neyman-Scott Rectangular Pulse (NSRP) models have been widely used. The optimal parameters of such a model can be estimated by repeatedly comparing statistical moments, defined through a combination of the probability distribution's parameters, in an optimization context. However, parameter estimation from relatively small sets of observed rainfall statistics is an ill-posed problem, which increases the uncertainty of the estimation process. In addition, as shown in previous studies, extreme values are underestimated because objective functions are typically defined by the first and second statistical moments (i.e., mean and variance). This study therefore estimated the parameters of the NSRP model using an objective function that includes the third moment and compared it with the existing approach based on the first and second moments in terms of extreme rainfall estimation. The first and second moments did not differ significantly depending on whether skewness was included in the objective function; however, the proposed model showed significantly improved performance in estimating design rainfalls.
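The calibration idea described above can be sketched as a moment-matching objective. The function below is a minimal illustration, not the paper's actual objective: the weights, the relative-error form, and the example statistics are all assumptions. Adding the `skew` term is what corresponds to including the third statistical moment.

```python
# Minimal sketch of a moment-matching objective for stochastic rainfall
# model calibration. Weights and example values are hypothetical.

def moment_objective(observed, simulated, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of squared relative errors over mean, variance, skewness.

    `observed` and `simulated` are dicts with keys 'mean', 'var', 'skew'.
    Dropping the 'skew' term recovers the conventional two-moment objective.
    """
    total = 0.0
    for w, key in zip(weights, ("mean", "var", "skew")):
        total += w * ((simulated[key] - observed[key]) / observed[key]) ** 2
    return total

obs = {"mean": 3.2, "var": 40.1, "skew": 2.8}   # hypothetical hourly-rainfall stats
sim = {"mean": 3.0, "var": 38.5, "skew": 1.9}   # stats from one candidate parameter set
score = moment_objective(obs, sim)               # minimized over NSRP parameters
```

An optimizer would then search the NSRP parameter space for the set whose simulated statistics minimize this score; in this toy example most of the penalty comes from the skewness mismatch, which is exactly the signal the two-moment objective discards.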

Forecasting Korean CPI Inflation (우리나라 소비자물가상승률 예측)

  • Kang, Kyu Ho;Kim, Jungsung;Shin, Serim
    • Economic Analysis, v.27 no.4, pp.1-42, 2021
  • The outlook for Korea's consumer price inflation has a profound impact not only on the Bank of Korea's operation of its inflation-targeting system but also on the overall economy, including the bond market and private consumption and investment. This study presents predictions of consumer price inflation in Korea for the next three years. To this end, model selection is first performed based on the out-of-sample predictive power of autoregressive distributed lag (ADL) models, AR models, small-scale vector autoregressive (VAR) models, and large-scale VAR models. Since there are many potential predictors of inflation, a Bayesian variable selection technique was applied to 12 macro variables, and a careful tuning process was performed to improve predictive power. For the VAR models, the Minnesota prior was applied to mitigate the curse of dimensionality. In long-term and short-term out-of-sample predictions over the last five years, the ADL model was generally superior to the competing models in both point and distribution forecasts. Combining the forecasts of the above models, the inflation rate is expected to stay around its current level of 2% until the second half of 2022 and to drop to around 1% from the first half of 2023.
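To make the ADL idea concrete, here is a minimal sketch of fitting an ADL(1,1) by ordinary least squares and producing a one-step forecast. The data are synthetic and the predictor `x` is a placeholder (the paper's actual 12 macro variables, Bayesian variable selection, and tuning are not reproduced).

```python
# Minimal ADL(1,1) sketch: y_t = c + a*y_{t-1} + b*x_{t-1} + e_t.
# Synthetic data with known coefficients; OLS should roughly recover them.
import numpy as np

rng = np.random.default_rng(0)
T = 200
x = rng.normal(size=T)                      # hypothetical predictor series
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):                       # generate from a known ADL(1,1)
    y[t] = 0.5 + 0.6 * y[t - 1] + 0.3 * x[t - 1] + 0.1 * rng.normal()

# OLS: regress y_t on [1, y_{t-1}, x_{t-1}]
X = np.column_stack([np.ones(T - 1), y[:-1], x[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

one_step = beta @ [1.0, y[-1], x[-1]]       # out-of-sample one-step forecast
```

Model selection as in the study would repeat this fit over rolling out-of-sample windows for each candidate specification and keep the one with the smallest forecast errors; a forecast combination is then just a (weighted) average of the surviving models' `one_step` values.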

Development of Elbow Joint X-ray Examination Aid for Medical Imaging Diagnosis (의료영상 진단을 위한 팔꿉관절 X-선 검사 보조기구 개발)

  • Hyeong-Gyun Kim
    • Journal of the Korean Society of Radiology, v.18 no.2, pp.127-133, 2024
  • The elbow joint is made up of three different bones. X-rays and other radiological exams are commonly used to diagnose elbow injuries or disorders caused by physical activity and external forces. Previous research on the elbow joint reported a new examination method that meets the imaging evaluation criteria in a tilted position obtained by Z-axis elevation of the forearm. This study therefore aims to design an optimized instrument and develop an aid applicable to other upper-extremity exams. After the 2D drawing and 3D modeling design were completed, the final design, divided into four parts, was fabricated with a 3D printer using ABS plastic and assembled. The developed examination aid provides a four-stage Z-axis elevation tilt function (0°, 5°, 10°, and 15°) and can be rotated and fixed through 360° in 1° increments. Through structural analysis, it was designed to withstand a maximum equivalent stress of 56.107 Pa with a displacement of 1.6548e-5 mm, addressing loading from cumulative use and physical handling. Beyond X-ray exams of the elbow joint, the aid can be used for shoulder function tests involving rotation of the humerus and, being made of non-metallic materials, can also be applied to MRI and CT exams. It is expected to contribute to the accuracy and efficiency of medical imaging diagnosis through future clinical application across various devices and imaging exams.

Development of machine learning prediction model for weight loss rate of chestnut (Castanea crenata) according to knife peeling process (밤의 칼날식 박피공정에 따른 머신 러닝 기반 중량감모율 예측 모델 개발)

  • Tae Hyong Kim;Ah-Na Kim;Ki Hyun Kwon
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.17 no.4, pp.236-244, 2024
  • A representative problem in the domestic chestnut industry is the high loss of flesh caused by excessive knife peeling aimed at increasing the peeling rate, which reduces production efficiency. In this study, a model for predicting the weight loss rate of chestnuts at each stage of the knife peeling process was developed as a preliminary study toward optimizing the machine's operating conditions. Fifty-one control conditions of the two-stage blade peeler used in the experiment were derived, and each was repeated three times to obtain 153 data points. Machine learning (ML) models, including an artificial neural network (ANN) and a random forest (RF), were implemented to predict the weight loss rate at each peeling stage (after the first peeling, after the second peeling, and after final discharge). Model performance was evaluated using the coefficient of determination (R), normalized root mean square error (nRMSE), and mean absolute error (MAE). Across all peeling stages, the RF model had higher prediction accuracy (higher R) and lower prediction error (lower nRMSE and MAE) than the ANN model. The final RF prediction model showed excellent performance, with negligible error between the experimental and predicted values. The proposed model can therefore be used to set optimal knife peeling conditions that minimize the weight loss of domestic chestnut flesh while maximizing the peeling rate.
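The two error metrics used to compare the models can be written in a few lines. This is a generic sketch with made-up values, not the paper's data; note that nRMSE requires choosing a normalizer, and the range of the observations is assumed here.

```python
# Minimal implementations of the error metrics used to compare the
# ANN and RF weight-loss models. Observations are illustrative.
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def nrmse(y_true, y_pred):
    """RMSE normalized by the range of the observed values (one common choice)."""
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return math.sqrt(mse) / (max(y_true) - min(y_true))

obs  = [10.2, 12.5, 9.8, 11.1]   # hypothetical weight-loss rates (%)
pred = [10.0, 12.9, 9.5, 11.4]   # hypothetical model predictions
```

Lower values of both metrics indicate a better model, which is the sense in which the RF model outperformed the ANN in the abstract above.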

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.71-88, 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, because a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large, increasing model complexity. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). This paper therefore proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit composing Korean text. We construct the language model using three or four LSTM layers. Each model was trained with stochastic gradient descent and with more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was performed on Old Testament texts using the deep learning package Keras with a Theano backend.
After preprocessing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following (21st) character as the output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with a 16-core Intel Xeon CPU and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All optimization algorithms except stochastic gradient descent showed similar validation loss and perplexity, clearly superior to those of stochastic gradient descent, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly improved and were even worse under some conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although the completeness of the generated sentences differed slightly between models, sentence generation was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. These results are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which are foundations of artificial intelligence systems.
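The phoneme-level preprocessing described above can be sketched with the standard Unicode composition formula for Hangul (precomposed syllables occupy U+AC00..U+D7A3 and decompose arithmetically into lead consonant, vowel, and optional tail consonant), plus a sliding window that pairs 20 consecutive units with the 21st as the target. This is an illustrative reconstruction, not the paper's actual code.

```python
# Sketch of phoneme-level preprocessing: decompose Hangul syllables into
# jamo via the Unicode composition formula, then build (input, target)
# windows where 20 units predict the next one, as in the abstract above.

LEADS  = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"                 # 19 lead consonants
VOWELS = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"            # 21 vowels
TAILS  = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 tails + none

def to_jamo(text):
    """Split precomposed Hangul syllables into phoneme-level jamo."""
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:           # precomposed Hangul syllable
            idx = code - 0xAC00                # idx = lead*588 + vowel*28 + tail
            out.append(LEADS[idx // 588])
            out.append(VOWELS[(idx % 588) // 28])
            if idx % 28:                       # tail consonant is optional
                out.append(TAILS[idx % 28])
        else:
            out.append(ch)                     # punctuation, spaces, etc.
    return out

def windows(seq, size=20):
    """(input, target) pairs: `size` consecutive units predict the next one."""
    return [(seq[i:i + size], seq[i + size]) for i in range(len(seq) - size)]
```

Each target would then be one-hot encoded over the vocabulary of unique units (74 in the paper's corpus) and fed to the stacked LSTM layers; test-set perplexity is simply the exponential of the average cross-entropy loss.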

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.63-83, 2019
  • Investors prefer to look for trading points based on chart shapes rather than complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult to computerize, and existing tools fall short of users' needs. In recent years, many studies have examined stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although short-term price forecasting performance has improved, long-term forecasting power remains limited, so these methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but such approaches can be vulnerable in practice because whether the discovered patterns are suitable for trading is a separate question. When a meaningful pattern is found, these studies locate points that match the pattern and measure performance after n days, assuming a purchase at that point in time. Because this approach computes hypothetical returns, it can diverge considerably from reality. Whereas existing research tries to find patterns with predictive power for stock prices, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Although some patterns were reported to have price predictability, no performance in actual markets had been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of improving pattern recognition accuracy.
In this study, 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate in each group is selected for trading. Patterns with a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a realistic situation because both the buy and the sell are assumed to have been executed. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then calculates the vertices. In the second, the high-low line zig-zag method, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests, suggesting that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases was far too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable solution for finding patterns with high success rates. We also ran the simulation using the walk-forward analysis (WFA) method, which separates the test period from the application period, allowing us to respond appropriately to market changes. In this study, we optimize at the stock portfolio level because optimizing the variables for each individual stock risks over-optimization.
We therefore selected 20 constituent stocks to gain the benefits of diversified investment while avoiding over-optimization, and tested the KOSPI market divided into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to form, but higher volatility is not always better.
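The swing wave turning-point rule described in the abstract can be sketched as follows. This is a simplified illustration using a single price series (the abstract compares highs with highs and lows with lows); the prices and the choice of n are made up.

```python
# Sketch of the swing wave rule: a bar is a peak if its price exceeds the
# n bars on each side, and a valley if it is below the n bars on each side.
# M & W patterns would then be read off sequences of five such turning points.

def swing_points(prices, n=2):
    """Return (peak indices, valley indices) for a price series."""
    peaks, valleys = [], []
    for i in range(n, len(prices) - n):
        left  = prices[i - n:i]
        right = prices[i + 1:i + n + 1]
        if prices[i] > max(left + right):      # strictly above all neighbors
            peaks.append(i)
        elif prices[i] < min(left + right):    # strictly below all neighbors
            valleys.append(i)
    return peaks, valleys

prices = [10, 11, 13, 12, 11, 9, 10, 12, 14, 13, 12]  # hypothetical closes
peaks, valleys = swing_points(prices, n=2)
```

Because a bar is confirmed as a turning point only after n further bars are observed, trading on these points inherently waits for the pattern to complete, matching the abstract's finding that trading after pattern completion was more effective.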