• Title/Summary/Keyword: Optimization algorithms


A Development of Automatic Lineament Extraction Algorithm from Landsat TM images for Geological Applications (지질학적 활용을 위한 Landsat TM 자료의 자동화된 선구조 추출 알고리즘의 개발)

  • 원중선;김상완;민경덕;이영훈
    • Korean Journal of Remote Sensing
    • /
    • v.14 no.2
    • /
    • pp.175-195
    • /
    • 1998
  • Automatic lineament extraction algorithms have been developed by various researchers for geological purposes using remotely sensed data. However, most of them are designed for a particular topographic model, for instance a rugged mountainous region or a flat basin. The most common topography in Korea is mountainous terrain adjoining alluvial plains, and consequently it is difficult to apply previous algorithms directly to this area. In this study, a new algorithm for automatic lineament extraction from remotely sensed images is developed specifically for geological applications. An algorithm named DSTA (Dynamic Segment Tracing Algorithm) is developed to produce a binary image composed of linear and non-linear components. The proposed algorithm effectively reduces the look-direction bias associated with the sun's azimuth angle, as well as the noise in low-contrast regions, by utilizing a dynamic sub-window. This algorithm can successfully accommodate lineaments in alluvial plains as well as mountainous regions. Two additional algorithms for estimating individual lineament vectors, named ALEHHT (Automatic Lineament Extraction by Hierarchical Hough Transform) and ALEGHT (Automatic Lineament Extraction by Generalized Hough Transform), which incorporate merging operations into the hierarchical Hough transform and generalized Hough transform respectively, are also developed to generate geological lineaments. The merging operation proposed in this study depends on three parameters: the angle between two lines ($\delta\beta$), the perpendicular distance ($d_{ij}$), and the distance between the midpoints of the lines ($d_n$); a minimal sketch of this test appears below. Test results of the developed algorithm on a Landsat TM image demonstrate that lineaments in alluvial plains as well as in rugged mountains are extracted extremely well. Even lineaments parallel to the sun's azimuth angle are detected by this approach. Further study is, however, required to accommodate the effect of the quantization interval ($d\rho$) parameter in ALEGHT for optimization.
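
The three-parameter merging operation can be read as a simple three-way predicate on segment pairs. The following is a minimal sketch of such a test, assuming segments are given as endpoint pairs; the threshold values are illustrative placeholders, not parameters reported in the paper.

```python
import math

# Illustrative thresholds only -- not values from the paper.
MAX_ANGLE = math.radians(5.0)   # delta-beta: angle between the two lines
MAX_PERP = 2.0                  # d_ij: perpendicular distance (pixels)
MAX_GAP = 10.0                  # d_n: distance between midpoints (pixels)

def should_merge(seg_a, seg_b):
    """Decide whether two segments ((x1, y1), (x2, y2)) belong to the
    same lineament, using the three merging parameters."""
    (ax1, ay1), (ax2, ay2) = seg_a
    (bx1, by1), (bx2, by2) = seg_b

    # delta-beta: difference of segment orientations, folded into [0, pi/2]
    ang_a = math.atan2(ay2 - ay1, ax2 - ax1)
    ang_b = math.atan2(by2 - by1, bx2 - bx1)
    d_beta = abs(ang_a - ang_b) % math.pi
    d_beta = min(d_beta, math.pi - d_beta)

    # d_ij: perpendicular distance from b's midpoint to the line through a
    mbx, mby = (bx1 + bx2) / 2, (by1 + by2) / 2
    dx, dy = ax2 - ax1, ay2 - ay1
    d_ij = abs(dy * (mbx - ax1) - dx * (mby - ay1)) / math.hypot(dx, dy)

    # d_n: distance between the two midpoints
    max_, may = (ax1 + ax2) / 2, (ay1 + ay2) / 2
    d_n = math.hypot(mbx - max_, mby - may)

    return d_beta <= MAX_ANGLE and d_ij <= MAX_PERP and d_n <= MAX_GAP
```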

Performance Analysis and Comparison of Stream Ciphers for Secure Sensor Networks (안전한 센서 네트워크를 위한 스트림 암호의 성능 비교 분석)

  • Yun, Min;Na, Hyoung-Jun;Lee, Mun-Kyu;Park, Kun-Soo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.18 no.5
    • /
    • pp.3-16
    • /
    • 2008
  • A Wireless Sensor Network (WSN for short) is a wireless network consisting of distributed small devices called sensor nodes or motes. Recently, there has been extensive research on WSNs and on their security. For secure storage and secure transmission of the sensed information, sensor nodes should be equipped with cryptographic algorithms. Moreover, these algorithms should be implemented efficiently, since sensor nodes are highly resource-constrained devices. Some existing algorithms are already applicable to sensor nodes, including public key ciphers such as TinyECC and standard block ciphers such as AES. Stream ciphers, however, have yet to be analyzed, since they were only recently standardized in the eSTREAM project. In this paper, we implement on the MicaZ platform nine of the ten software-based stream ciphers from the second and final phases of the eSTREAM project, and we evaluate their performance. In particular, we apply several optimization techniques to six ciphers, including SOSEMANUK, Salsa20, and Rabbit, which survived the final phase of the eSTREAM project. We also present implementation results for hardware-oriented stream ciphers and AES-CFB for reference. According to our experiments, the encryption speeds of these software-based stream ciphers are in the range of 31-406 Kbps, so most of these ciphers are fairly acceptable for sensor nodes. In particular, the survivors SOSEMANUK, Salsa20, and Rabbit show throughputs of 406 Kbps, 176 Kbps, and 121 Kbps using 70 KB, 14 KB, and 22 KB of ROM and 2811 B, 799 B, and 755 B of RAM, respectively. From the viewpoint of encryption speed, these ciphers perform much better than software-based AES, which shows a speed of 106 Kbps. (A sketch of a throughput measurement harness appears below.)
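
For context on how throughput figures of this kind are obtained, here is a hedged sketch of a generic measurement harness; it is not the paper's MicaZ test code, and the `encrypt` callable is a stand-in for any cipher implementation.

```python
import time

def throughput_kbps(encrypt, message, iterations=100):
    """Estimate encryption throughput in kilobits per second by timing
    repeated calls to an arbitrary `encrypt` function. Illustrative
    harness only; the paper's measurements were taken on MicaZ motes."""
    start = time.perf_counter()
    for _ in range(iterations):
        encrypt(message)
    elapsed = time.perf_counter() - start
    return len(message) * 8 * iterations / elapsed / 1000.0

# Example with a do-nothing placeholder cipher.
print(throughput_kbps(lambda m: m, b"\x00" * 1024))
```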

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning has developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used for neural language modeling (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between objects that enter the model sequentially (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). In order to train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can only generate vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to make errors during decomposition (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that comprises Korean text. We construct language models using three or four LSTM layers. Each model was trained using stochastic gradient descent as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was performed on Old Testament texts using the deep learning package Keras with the Theano backend. After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all the optimization algorithms except stochastic gradient descent showed similar validation loss and perplexity, clearly superior to those of stochastic gradient descent, which also took the longest to train for both the 3- and 4-LSTM-layer models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved, and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model.
Although there were slight differences between the models in the completeness of the generated sentences, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfectly grammatical. The results of this study are expected to be widely used for Korean language processing and speech recognition, which are the basis of artificial intelligence systems. (A minimal model sketch follows below.)
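
As a rough illustration of the architecture described above, here is a minimal sketch written against the modern tf.keras API (the paper used Keras on Theano); the hidden size, the choice of Adam among the optimizers listed, and the one-hot encoding are assumptions.

```python
import numpy as np
from tensorflow import keras

SEQ_LEN, VOCAB = 20, 74  # 20 input characters, 74 unique characters

# 3-LSTM-layer variant; hidden size 256 is an assumed value.
model = keras.Sequential([
    keras.layers.Input(shape=(SEQ_LEN, VOCAB)),
    keras.layers.LSTM(256, return_sequences=True),
    keras.layers.LSTM(256, return_sequences=True),
    keras.layers.LSTM(256),
    keras.layers.Dense(VOCAB, activation="softmax"),  # next-character distribution
])
# Adam is one of the optimizers the study compares against SGD.
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Dummy one-hot batch showing the expected tensor shapes.
x = np.zeros((2, SEQ_LEN, VOCAB), dtype="float32")
y = np.zeros((2, VOCAB), dtype="float32")
y[:, 0] = 1.0
model.train_on_batch(x, y)
```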

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis and technical indicator analysis. However, pattern analysis is difficult and has been computerized far less than users need. In recent years, there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge numbers of charts in search of patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier techniques could not recognize, but this can be fragile in practice, because whether the patterns found are suitable for trading is a separate matter. When such studies find a meaningful pattern, they locate a point that matches it and measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, it can diverge considerably from reality. Whereas existing methods try to discover patterns with stock price prediction power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple, because they can be distinguished by five turning points. Despite reports that some of the patterns have price predictability, there were no performance reports from actual markets. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of improving pattern recognition accuracy. In this study, 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be implemented easily in a system, and only the one pattern with the highest success rate per group is selected for trading: patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. Our measurement is also closer to a real situation because it assumes that both the buy and the sell are actually executed. We tested three ways of calculating turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then finds the vertices. In the second, the high-low-line zig-zag method, a high that touches the n-day high line is taken as a peak, and a low that touches the n-day low line is taken as a valley. In the third, the swing wave method, a central high that is higher than the n highs on its left and right is taken as a peak, and a central low that is lower than the n lows on its left and right is taken as a valley (see the sketch after this entry). The swing wave method was superior to the other methods in our tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of candidate cases was far too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable solution for finding patterns with high success rates. We also ran the simulation using the Walk-forward Analysis (WFA) method, which separates the test section from the application section, so we were able to respond appropriately to market changes. In this study, we optimize at the level of the stock portfolio, because optimizing the variables for each individual stock risks over-optimization; we therefore selected 20 constituent stocks, increasing the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market divided into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that prices need some volatility for patterns to take shape, but that the highest volatility is not necessarily best.
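
As an illustration of the third turning-point rule, here is a minimal sketch of a swing-wave detector on a single price series; using one series instead of separate highs and lows, and the window size n, are simplifying assumptions rather than the paper's settings.

```python
def swing_turning_points(prices, n=2):
    """Mark index i as a peak if prices[i] exceeds the n values on each
    side, and as a valley if it is below the n values on each side."""
    points = []
    for i in range(n, len(prices) - n):
        neighbors = prices[i - n:i] + prices[i + 1:i + n + 1]
        if prices[i] > max(neighbors):
            points.append((i, prices[i], "peak"))
        elif prices[i] < min(neighbors):
            points.append((i, prices[i], "valley"))
    return points

# Five alternating turning points would outline one M/W-style pattern.
series = [10, 12, 11, 15, 13, 9, 11, 14, 12, 16, 15, 13]
print(swing_turning_points(series))
```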

Novel Power Bus Design Method for High-Speed Digital Boards (고속 디지털 보드를 위한 새로운 전압 버스 설계 방법)

  • Wee, Jae-Kyung
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.12 s.354
    • /
    • pp.23-32
    • /
    • 2006
  • A fast and accurate power bus design (FAPUD) method for multilayer high-speed digital boards is devised as a power supply network design tool. FAPUD is built on two main algorithms: the PBEC (Path-Based Equivalent Circuit) model and a network synthesis method. The PBEC model derives simple arithmetic expressions for a lumped 1-D circuit model from the electrical parameters of a 2-D power distribution network. Circuit-level design based on PBEC is carried out with the proposed regional approach: it directly calculates and determines the size of on-chip decoupling capacitors, the size and location of off-chip decoupling capacitors, and the effective inductances of the package power bus (a lumped decoupling-capacitor sketch follows below). As design outputs, FAPUD produces a lumped circuit model and a pre-layout of the power bus including all decoupling capacitors. In the tuning procedure, the board can be re-optimized for the simultaneous switching noise (SSN) added by I/O switching, because the effect of I/O switching on power supply noise can be estimated over the operating frequency range with the lumped circuit model. Furthermore, if a design changes or needs tuning, FAPUD can modify the design by replacing decoupling capacitors without consuming other design resources. Finally, FAPUD matches the accuracy of conventional PEEC-based design tools while its design time is about ten times shorter.
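
At the lumped level, the decoupling capacitors that such a flow sizes and places behave as series RLC branches. As a hedged aside (a textbook model, not the PBEC formulation itself), the impedance such an element presents to the power bus can be computed as follows; all component values are illustrative.

```python
import math

def decap_impedance(freq_hz, c_f, esl_h, esr_ohm):
    """|Z| of a decoupling capacitor modeled as a series RLC branch:
    sqrt(ESR^2 + (wL - 1/(wC))^2)."""
    w = 2 * math.pi * freq_hz
    return math.hypot(esr_ohm, w * esl_h - 1.0 / (w * c_f))

# 100 nF capacitor, 1 nH ESL, 10 mOhm ESR, swept across frequency;
# the minimum near the series resonance is where the decap is effective.
for f in (1e6, 10e6, 100e6):
    print(f"{f:>9.0f} Hz  {decap_impedance(f, 100e-9, 1e-9, 0.01):.4f} Ohm")
```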

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taeksoo;Han, Ingoo
    • Proceedings of the Korea Database Society Conference
    • /
    • 1999.06a
    • /
    • pp.175-186
    • /
    • 1999
  • Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering methods (or multi-scale decompositions) such as wavelet analysis have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components. The reason is that wavelet analysis theoretically yields much better local information at different time intervals from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets are particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, studies related to wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques to support multi-signal decomposition methods for financial time-series forecasting, and we apply them to forecasting the Korean Won / U.S. Dollar exchange rate as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of filter type and filter parameters: if the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a certain theoretical or statistical criterion, and new, versatile techniques for this problem have recently been introduced. Our study first analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Secondly, by comparing the results of different filtering techniques, we present the different filtering criteria of wavelet analysis that support neural network learning optimization, and we analyze the critical issues related to optimal filter design in wavelet analysis, namely finding the optimal filter parameters for extracting significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints concerning the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal in forecasting models, especially well-known neural network models. (A minimal thresholding sketch follows below.)
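
As a concrete instance of the shrinkage step discussed above, here is a minimal sketch using PyWavelets with the Donoho-Johnstone universal threshold, one of the standard criteria of this kind; the wavelet choice and decomposition level are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=3):
    """Decompose, soft-threshold the detail coefficients, reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)

# Noisy quasi-cyclical series, as in the exchange-rate setting above.
t = np.linspace(0, 8 * np.pi, 1024)
noisy = np.sin(t) + 0.3 * np.random.randn(t.size)
smooth = wavelet_denoise(noisy)
```

Whether the threshold over- or under-smooths depends on this criterion; a smaller threshold keeps more (possibly noisy) detail, a larger one risks removing signal-bearing high frequencies.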


Minimum Cost Path for Private Network Design (개인통신망 설계를 위한 최소 비용 경로)

  • Choe, Hong-Sik;Lee, Ju-Yeong
    • Journal of KIISE: Computer Systems and Theory
    • /
    • v.26 no.11
    • /
    • pp.1373-1381
    • /
    • 1999
  • This paper considers a graph-theoretic problem motivated by telecommunication network design. When a private organization wishes to connect two distant sites, it builds its network by leasing lines from a public telecommunications network, and in many cases several categories of lines are offered, so the question arises of which line to choose. Typically, a faster (low-delay) line costs more than a slower (high-delay) one. However, because Quality of Service (QoS) requirements often impose a bound on the end-to-end delay, the cheapest lines cannot be used exclusively. Overlaying a private path on the public network therefore involves two diametrically opposed factors: cost and delay. The following variation of the standard shortest-path problem is thus our interest: find a minimum-cost route between the two sites that meets a given bound on the end-to-end delay. We formalize this as a graph-theoretic problem that has both a shortest-path component and a coloring component. Through a transformation to the knapsack problem, we prove the general problem NP-complete. We then give a pseudo-polynomial algorithm that finds the optimum in $O(|E|D_0)$ time and an $O(|E|)$-time approximation algorithm, as well as $O(|V|+|E|)$- and $O(|E|^2+|E||V|\log|V|)$-time algorithms for special cases. Finally, experiments confirm that a greedy heuristic similar to knapsack solutions performs well on mesh graphs. (A dynamic-programming sketch of the pseudo-polynomial idea follows below.)
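
The pseudo-polynomial bound suggests a dynamic program over delay budgets. The following is a minimal sketch of that idea, assuming positive integer edge delays and an edge-list encoding; it illustrates the $O(|E|D_0)$ bound rather than reproducing the paper's algorithm.

```python
import math

def cheapest_path_within_delay(n, edges, src, dst, max_delay):
    """best[v][t] = least cost of reaching node v with total delay <= t.
    edges: list of (u, v, cost, delay) with positive integer delays.
    Runs in O(|E| * max_delay) time."""
    best = [[math.inf] * (max_delay + 1) for _ in range(n)]
    for t in range(max_delay + 1):
        best[src][t] = 0.0
    for t in range(1, max_delay + 1):  # grow the delay budget
        for u, v, cost, delay in edges:
            if delay <= t and best[u][t - delay] + cost < best[v][t]:
                best[v][t] = best[u][t - delay] + cost
    return best[dst][max_delay]

# Cheap-but-slow two-hop route vs. fast-but-costly direct line.
edges = [(0, 1, 1.0, 4), (1, 2, 1.0, 4), (0, 2, 5.0, 3)]
print(cheapest_path_within_delay(3, edges, 0, 2, max_delay=8))  # 2.0
print(cheapest_path_within_delay(3, edges, 0, 2, max_delay=6))  # 5.0
```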

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taek-Soo;Han, In-Goo
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 1999.03a
    • /
    • pp.175-186
    • /
    • 1999
  • Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering methods (or multi-scale decompositions) such as wavelet analysis have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components. The reason is that wavelet analysis theoretically yields much better local information at different time intervals from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets are particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, studies related to wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques to support multi-signal decomposition methods for financial time-series forecasting, and we apply them to forecasting the Korean Won / U.S. Dollar exchange rate as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of filter type and filter parameters: if the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a certain theoretical or statistical criterion, and new, versatile techniques for this problem have recently been introduced. Our study first analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Secondly, by comparing the results of different filtering techniques, we present the different filtering criteria of wavelet analysis that support neural network learning optimization, and we analyze the critical issues related to optimal filter design in wavelet analysis, namely finding the optimal filter parameters for extracting significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints concerning the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal in forecasting models, especially well-known neural network models.


Effective Harmony Search-Based Optimization of Cost-Sensitive Boosting for Improving the Performance of Cross-Project Defect Prediction (교차 프로젝트 결함 예측 성능 향상을 위한 효과적인 하모니 검색 기반 비용 민감 부스팅 최적화)

  • Ryu, Duksan;Baik, Jongmoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.3
    • /
    • pp.77-90
    • /
    • 2018
  • Software Defect Prediction (SDP) is a field of study that identifies defective modules. When local data are insufficient, a company can exploit Cross-Project Defect Prediction (CPDP), a way to build a classifier using datasets collected from other companies. Most machine learning algorithms for SDP use one or more parameters whose values significantly affect prediction performance. The objective of this study is to propose a parameter selection technique that enhances the performance of CPDP. Using the Harmony Search algorithm (HS), our approach tunes the parameters of cost-sensitive boosting, a method for tackling the class imbalance that makes prediction difficult. Parameter ranges and constraint rules between parameters are defined according to distributional characteristics and applied to HS. The proposed approach is compared with three CPDP methods and a Within-Project Defect Prediction (WPDP) method on fifteen target projects. The experimental results indicate that the proposed model outperforms the other CPDP methods under class imbalance. Unlike previous studies, which showed a high probability of false alarm (PF) or a low probability of detection (PD), our approach achieves acceptably high PD and low PF while providing high overall performance. It also performs comparably to WPDP. (A generic Harmony Search sketch follows below.)
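
To make the search loop concrete, here is a generic Harmony Search sketch of the kind used above; the HS hyperparameters (memory size, HMCR, PAR, bandwidth) and the toy objective are illustrative assumptions, not the paper's settings.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iterations=200):
    """Minimize `objective` over the box `bounds` = [(lo, hi), ...]."""
    memory = [[random.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:            # memory consideration
                x = random.choice(memory)[d]
                if random.random() < par:         # pitch adjustment
                    x += random.uniform(-bw, bw) * (hi - lo)
            else:                                 # random selection
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=lambda i: scores[i])
        s = objective(new)
        if s < scores[worst]:                     # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy usage: tune two parameters against a quadratic objective.
print(harmony_search(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                     [(-5.0, 5.0), (-5.0, 5.0)]))
```

In the paper's setting, `objective` would train the cost-sensitive boosting classifier with a candidate parameter vector and return a performance-based score to minimize, with the constraint rules between parameters enforced alongside the bounds check.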

Depth Upsampling Method Using Total Generalized Variation (일반적 총변이를 이용한 깊이맵 업샘플링 방법)

  • Hong, Su-Min;Ho, Yo-Sung
    • Journal of Broadcast Engineering
    • /
    • v.21 no.6
    • /
    • pp.957-964
    • /
    • 2016
  • Acquisition of reliable depth maps is a critical requirement in many applications such as 3D video and free-viewpoint TV. Depth information can be obtained directly from an object using physical sensors such as infrared (IR) sensors. Recently, Time-of-Flight (ToF) range cameras, including the KINECT depth camera, have become popular alternatives for dense depth sensing. Although ToF cameras can capture depth information in real time, their outputs are noisy and of low resolution. Filter-based depth upsampling algorithms such as joint bilateral upsampling (JBU) and the noise-aware filter for depth upsampling (NAFDU) have been proposed to obtain high-quality depth information, but these methods often cause texture copying in the upsampled depth map. To overcome this limitation, we formulate a convex optimization problem using higher-order regularization for depth map upsampling, and we reduce the texture copying problem by using an edge-weighting term chosen from the edge information. Experimental results show that our scheme produces more reliable depth maps than previous methods. (A sketch of such an energy appears below.)
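
For reference, a second-order TGV upsampling energy of the kind the abstract describes can be written as below; this is a minimal sketch assuming the standard formulation, and the exact weighting and data term are not necessarily the paper's.

```latex
\min_{u,\,v}\;
  \alpha_1 \int_\Omega w(x)\,\lvert \nabla u(x) - v(x) \rvert \,dx
+ \alpha_0 \int_\Omega \lvert \nabla v(x) \rvert \,dx
+ \lambda  \int_\Omega \lvert u(x) - \hat{D}(x) \rvert^2 \,dx
```

Here $u$ is the upsampled depth map, $v$ an auxiliary vector field, $\hat{D}$ the low-resolution depth mapped onto the high-resolution grid, and $w(x)$ an edge weight derived from the guidance image, so that depth discontinuities align with image edges while texture gradients are suppressed.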