• Title/Summary/Keyword: optimization approach

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on chart patterns rather than on complex analyses such as corporate intrinsic-value analysis or technical auxiliary indices. Pattern analysis, however, is difficult, and it has been computerized far less than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, from the field of artificial intelligence (AI). In particular, advances in IT have made it easier to search huge volumes of chart data for patterns that can predict stock prices. Short-term price forecasting performance has improved, but long-term forecasting power remains limited, so these methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but this can be fragile in practice, because whether the patterns found are actually suitable for trading is a separate question. Such studies typically find a meaningful pattern, locate points that match it, and then measure performance n days later, assuming a purchase at that point in time. Since this approach computes hypothetical revenues, it can diverge considerably from reality. Where existing research tries to find patterns with price-prediction power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because each can be identified by five turning points. Despite reports that some of these patterns have price predictability, no performance results from actual markets have been reported. The simplicity of a five-turning-point pattern has the advantage of reducing the cost of raising pattern recognition accuracy. In this study, 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the single pattern with the highest success rate in each group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. Performance is measured assuming that both the buy and the sell are actually executed, which is closer to real trading. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and computes the vertices. The second, the high-low-line zig-zag method, takes a high price that touches the n-day high line as a peak and a low price that touches the n-day low line as a valley. The third, the swing wave method, takes a central high price that is higher than the n high prices to its left and right as a peak, and a central low price that is lower than the n low prices to its left and right as a valley. The swing wave method was superior to the other methods in our tests; we interpret this to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of possible cases in this simulation was far too large to search exhaustively, genetic algorithms (GA) were the most suitable solution for finding patterns with high success rates. We also ran the simulation with Walk-forward Analysis (WFA), which separates the test period from the application period, allowing us to respond appropriately to market changes. We optimize at the level of the stock portfolio, because optimizing the variables for each individual stock risks over-optimization; we therefore set the number of constituent stocks to 20 to gain the benefit of diversification while avoiding overfitting. We tested the KOSPI market divided into six categories. The small-cap portfolio was the most successful and the high-volatility portfolio was second best, which suggests that some price volatility is needed for patterns to take shape, but that the highest volatility is not necessarily best.
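
The swing wave rule described in this abstract lends itself to a compact implementation. The sketch below is illustrative only, not the authors' trading system; the window size n and the toy price series are assumptions:

```python
# Minimal sketch of the swing-wave turning-point rule: a bar is a peak if
# its high exceeds the highs of the n bars on each side, and a valley if
# its low is below the lows of the n bars on each side.
def swing_turning_points(highs, lows, n):
    """Return a list of (index, price, 'peak' | 'valley') turning points."""
    points = []
    for i in range(n, len(highs) - n):
        window_h = highs[i - n:i] + highs[i + 1:i + n + 1]
        window_l = lows[i - n:i] + lows[i + 1:i + n + 1]
        if highs[i] > max(window_h):
            points.append((i, highs[i], "peak"))
        elif lows[i] < min(window_l):
            points.append((i, lows[i], "valley"))
    return points

# Example: five alternating turning points would form one M/W-style pattern.
highs = [10, 11, 13, 12, 11, 12, 14, 13, 12, 11]
lows  = [ 9, 10, 12, 11, 10, 11, 13, 12, 11, 10]
print(swing_turning_points(highs, lows, n=2))
# -> [(2, 13, 'peak'), (4, 10, 'valley'), (6, 14, 'peak')]
```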

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.147-168 / 2017
  • There have been many academic studies of accurate stock market forecasting over a long period, and a variety of forecasting models now exist. Recently, many attempts have been made to predict stock indices using machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more useful for short-term trading prediction and for statistical and mathematical techniques. Most studies using technical indicators have modeled stock price prediction as binary classification of future market movement (usually the next trading day) into rising or falling. This binary classification, however, has many shortcomings for predicting trends, identifying trading signals, or generating portfolio rebalancing signals. In this study, we predict the stock index by expanding the existing binary scheme into a multi-class classification of the index trend (upward, boxed, downward). To solve this multi-classification problem, rather than relying on techniques such as Multinomial Logistic Regression (MLOGIT), Multiple Discriminant Analysis (MDA), or Artificial Neural Networks (ANN), we propose an optimization model that uses a Genetic Algorithm as a wrapper around Multi-class Support Vector Machines (MSVM), which have proved superior in prediction performance. The proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the MSVM kernel parameters but also the selection of input variables (feature selection) and of training cases (instance selection). To verify the model, we applied it to real data. The results show that the proposed method is more effective than conventional multi-class SVM, which has been known to give the best prediction performance to date, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection was confirmed to play a very important role in predicting the stock index trend, contributing more to the model's improvement than the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's KOSPI200 stock index. Our research primarily aims to predict trend segments in order to capture signals or short-term trend transition points. The experimental data set includes technical indicators such as price and volatility indices (2004~2017) and macroeconomic data (interest rates, exchange rates, the S&P 500, etc.) for the KOSPI200 index. Using statistical methods including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, takes three states: 1 (upward), 0 (boxed), and -1 (downward). For each class, 70% of the data was used for training and the remaining 30% for validation. Comparative experiments were conducted against MDA, MLOGIT, CBR, ANN, and plain MSVM. The MSVM adopts the One-Against-One (OAO) approach, known as the most accurate among MSVM formulations. Although some limitations remain, the final results demonstrate that GA-MSVM performs at a significantly higher level than all comparison models.
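
To make the wrapper design concrete, here is a compressed sketch of a GA over chromosomes encoding kernel parameters plus feature and instance masks. It is an illustration, not the paper's implementation: population size, truncation selection, mutation rates, and hyperparameter ranges are assumptions, and scikit-learn's SVC (which uses one-against-one for multi-class) stands in for the authors' MSVM:

```python
# GA-MSVM-style wrapper sketch: each chromosome carries RBF parameters
# (C, gamma) and boolean masks for feature and instance selection; fitness
# is validation accuracy of an SVM trained on the selected rows/columns.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def random_chromosome(n_feat, n_inst):
    return {"logC": rng.uniform(-2, 4), "logG": rng.uniform(-5, 1),
            "feat": rng.integers(0, 2, n_feat).astype(bool),
            "inst": rng.integers(0, 2, n_inst).astype(bool)}

def fitness(ch, X_tr, y_tr, X_va, y_va):
    y_sel = y_tr[ch["inst"]]
    if ch["feat"].sum() == 0 or ch["inst"].sum() < 10 or np.unique(y_sel).size < 2:
        return 0.0  # degenerate feature/instance selections score zero
    clf = SVC(kernel="rbf", C=10 ** ch["logC"], gamma=10 ** ch["logG"])
    clf.fit(X_tr[ch["inst"]][:, ch["feat"]], y_sel)
    return clf.score(X_va[:, ch["feat"]], y_va)

def mutate(ch, rate=0.05):
    return {"logC": ch["logC"] + rng.normal(0, 0.3),
            "logG": ch["logG"] + rng.normal(0, 0.3),
            "feat": ch["feat"] ^ (rng.random(ch["feat"].size) < rate),
            "inst": ch["inst"] ^ (rng.random(ch["inst"].size) < rate)}

def ga_msvm(X_tr, y_tr, X_va, y_va, pop=20, gens=30):
    chroms = [random_chromosome(X_tr.shape[1], X_tr.shape[0]) for _ in range(pop)]
    for _ in range(gens):
        chroms.sort(key=lambda c: fitness(c, X_tr, y_tr, X_va, y_va), reverse=True)
        elite = chroms[:pop // 2]  # truncation selection keeps the top half
        chroms = elite + [mutate(elite[rng.integers(len(elite))])
                          for _ in range(pop - len(elite))]
    return max(chroms, key=lambda c: fitness(c, X_tr, y_tr, X_va, y_va))
```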

Social Network-based Hybrid Collaborative Filtering using Genetic Algorithms (유전자 알고리즘을 활용한 소셜네트워크 기반 하이브리드 협업필터링)

  • Noh, Heeryong;Choi, Seulbi;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.19-38 / 2017
  • Collaborative filtering (CF) is a popular algorithm for implementing recommender systems, and many prior studies have sought to improve its accuracy. Among them, some recent studies adopt a 'hybrid recommendation approach', which enhances conventional CF with additional information. In this research, we propose a new hybrid recommender system that fuses CF with the results of social network analysis on trust and distrust relationship networks among users to improve prediction accuracy. The proposed algorithm is based on memory-based CF, but when calculating the similarity between users it considers not only the correlation of their numeric rating patterns but also their in-degree centrality values in the trust and distrust networks. Specifically, it amplifies the similarity between a target user and a neighbor when the neighbor has high in-degree centrality in the trust network, and attenuates it when the neighbor has high in-degree centrality in the distrust network. The algorithm considers four types of user relationship in total (direct trust, indirect trust, direct distrust, and indirect distrust) and uses four adjusting coefficients that control the degree of amplification or attenuation applied to the in-degree centrality values from the direct and indirect trust and distrust networks. Genetic algorithms (GA) are used to determine the optimal adjusting coefficients, so we named the proposed algorithm SNACF-GA (Social Network Analysis-based CF using GA). To validate its performance, we used a real-world data set, the 'Extended Epinions dataset' provided by trustlet.org, which contains user responses (rating scores and reviews) after purchasing specific items (e.g., cars, movies, music, books) as well as trust/distrust relationship information indicating whom each user trusts or distrusts. The experimental system was developed mainly in Microsoft Visual Basic for Applications (VBA); UCINET 6 was used to calculate in-degree centrality in the trust/distrust networks, and Palisade Software's Evolver, a commercial genetic algorithm package, was used for optimization. We adopted two comparison models. The first is conventional CF, which uses only users' explicit numeric ratings when calculating similarities and ignores trust/distrust relationships entirely. The second is SNACF (Social Network Analysis-based CF), which differs from SNACF-GA in that it considers only direct trust/distrust relationships and does not use GA optimization. Performance was evaluated by average MAE (mean absolute error). The experiments showed that the optimal adjusting coefficients for direct trust, indirect trust, direct distrust, and indirect distrust were 0, 1.4287, 1.5, and 0.4615 respectively, implying that distrust relationships between users matter more than trust relationships in recommender systems. In terms of recommendation accuracy, SNACF-GA (avg. MAE = 0.111943), which reflects both direct and indirect trust/distrust information, outperformed conventional CF (avg. MAE = 0.112638) and also beat SNACF (avg. MAE = 0.112209). Paired-samples t-tests showed that the difference between SNACF-GA and conventional CF was statistically significant at the 1% level, and the difference between SNACF-GA and SNACF at the 5% level. Our study found that trust/distrust relationships can be important information for improving recommendation algorithms, and that distrust information in particular has the greater impact on CF performance. This implies that more attention should be paid to distrust (negative) relationships than to trust (positive) ones when tracking and managing social relationships between users.
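
The amplification/attenuation step can be sketched as follows. The multiplicative form is a guess at one plausible realization (the abstract specifies the inputs and the four adjusting coefficients but not the exact formula), and all function and variable names here are invented for illustration:

```python
# Sketch of a trust/distrust-adjusted user similarity: boost by trust
# in-degree centrality, damp by distrust in-degree centrality, with four
# coefficients (a_dt, a_it, a_dd, a_id) corresponding to direct/indirect
# trust and distrust, tuned by a GA in the paper.
def adjusted_similarity(pearson_sim, neighbor,
                        direct_trust, indirect_trust,
                        direct_distrust, indirect_distrust,
                        a_dt, a_it, a_dd, a_id):
    """Each *_trust/*_distrust argument maps a user to an in-degree centrality."""
    boost = 1.0 + a_dt * direct_trust[neighbor] + a_it * indirect_trust[neighbor]
    damp = 1.0 + a_dd * direct_distrust[neighbor] + a_id * indirect_distrust[neighbor]
    return pearson_sim * boost / damp

# With the reported optimal coefficients (0, 1.4287, 1.5, 0.4615), the direct
# trust term drops out entirely and the distrust terms dominate the adjustment.
```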

A Study on the Optimization Methods of Security Risk Analysis and Management (경비위험 분석 및 관리의 최적화 방안에 관한 연구)

  • Lee, Doo-Suck
    • Korean Security Journal / no.10 / pp.189-213 / 2005
  • Risk should be managed systematically, by effectively evaluating the various risks that follow from social and environmental change and proposing countermeasures against them; enterprise risk management has recently become a major trend in the field. The first step in risk analysis is to recognize the risk factors, that is, to identify the vulnerabilities to loss in the security facilities. The second step is to assess the probability of loss for each factor, and the third is to evaluate the criticality of the loss. On the basis of this analysis, covering vulnerability, probability of loss, and criticality, the security manager assigns assessment grades and then a risk level to each risk factor. Expressing the result of risk analysis in mathematical terms is of great importance for a scientific approach to risk management. Using the risk levels obtained, the security manager can develop a comprehensive and complementary security plan. Among the measures for preparing against and minimizing loss, insurance is one of the best loss-prevention programs; however, insurance by itself is no longer able to meet the security challenges faced by major corporations. The security manager must weigh cost-effectiveness and propose productive risk management alternatives, drawing on security files that contain all relevant information on security matters. He or she must also reinforce company regulations on security and safety, and provide repeated education on security and risk management. Risk management makes the most efficient before-the-loss arrangements for, and after-the-loss continuation of, a business. It is therefore essential above all to propose cost-effective and realistic alternatives that optimize risk management, and this function should be maintained and developed continuously.
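
The three-step analysis (vulnerability, probability of loss, criticality) is often operationalized as an ordinal scoring grid. The paper gives no formula, so the snippet below is purely an illustration of one common grading scheme, with made-up risk factors and grades:

```python
# Illustrative only: grade each recognized risk factor on probability of
# loss and criticality (here 1-5), rank by the product, and direct
# countermeasure budgets to the highest risk levels first.
risk_factors = {
    "after-hours intrusion": {"probability": 4, "criticality": 5},
    "internal theft":        {"probability": 3, "criticality": 3},
    "visitor tailgating":    {"probability": 5, "criticality": 2},
}

ranked = sorted(risk_factors.items(),
                key=lambda kv: kv[1]["probability"] * kv[1]["criticality"],
                reverse=True)
for name, f in ranked:
    print(f'{name}: risk level {f["probability"] * f["criticality"]}')
```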

Direct Reconstruction of Displaced Subdivision Mesh from Unorganized 3D Points (연결정보가 없는 3차원 점으로부터 차이분할메쉬 직접 복원)

  • Jung, Won-Ki;Kim, Chang-Heon
    • Journal of KIISE: Computer Systems and Theory / v.29 no.6 / pp.307-317 / 2002
  • In this paper we propose a new mesh reconstruction scheme that produces a displaced subdivision surface directly from unorganized points. The displaced subdivision surface is a mesh representation that defines a detailed mesh as a displacement map over a smooth domain surface; however, the original displaced subdivision surface algorithm requires an explicit polygonal mesh as input, since it is a mesh conversion (remeshing) algorithm rather than a reconstruction algorithm. The main idea of our approach is to sample surface detail directly from unorganized points without any topological information: for each sampling ray cast from the parametric domain surface, we predict a virtual triangular face from the nearby unorganized points. Reconstructing a displaced subdivision surface directly from unorganized points is valuable because the output has several important properties: the mesh representation is compact, since most vertices can be represented by a single scalar value; the underlying structure is piecewise regular, so it can easily be transformed into a multiresolution mesh; and smoothness is automatically preserved after mesh deformation. We avoid time-consuming global energy optimization by employing input-data-dependent mesh smoothing, so a good-quality displaced subdivision surface is obtained quickly.
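
The per-ray sampling idea can be sketched roughly as follows; the PCA plane fit is a simplification of the paper's virtual-triangle prediction, and the neighborhood size k is an assumption:

```python
# Rough sketch: for each sampling ray from the smooth domain surface,
# estimate a local plane (a stand-in for the "virtual triangular face")
# from the nearest unorganized points, and record the ray-plane hit
# parameter as the scalar displacement for that sample.
import numpy as np

def sample_displacement(origin, direction, points, k=8):
    """origin/direction: ray from the domain surface; points: (N, 3) cloud."""
    d2 = np.sum((points - origin) ** 2, axis=1)
    near = points[np.argsort(d2)[:k]]            # k nearest cloud points
    centroid = near.mean(axis=0)
    # plane normal = right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(near - centroid)
    normal = vt[-1]
    denom = normal @ direction
    if abs(denom) < 1e-9:
        return None                              # ray parallel to local plane
    return (normal @ (centroid - origin)) / denom  # signed scalar displacement
```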

An Estimation of Price Elasticities of Import Demand and Export Supply Functions Derived from an Integrated Production Model (생산모형(生産模型)을 이용(利用)한 수출(輸出)·수입함수(輸入函數)의 가격탄성치(價格彈性値) 추정(推定))

  • Lee, Hong-gue
    • KDI Journal of Economic Policy / v.12 no.4 / pp.47-69 / 1990
  • Using an aggregator model, we look into the possibilities for substitution between Korea's exports, imports, domestic sales and domestic inputs (particularly labor), and substitution between disaggregated export and import components. Our approach heavily draws on an economy-wide GNP function that is similar to Samuelson's, modeling trade functions as derived from an integrated production system. Under the condition of homotheticity and weak separability, the GNP function would facilitate consistent aggregation that retains certain properties of the production structure. It would also be useful for a two-stage optimization process that enables us to obtain not only the net output price elasticities of the first-level aggregator functions, but also those of the second-level individual components of exports and imports. For the implementation of the model, we apply the Symmetric Generalized McFadden (SGM) function developed by Diewert and Wales to both stages of estimation. The first stage of the estimation procedure is to estimate the unit quantity equations of the second-level exports and imports that comprise four components each. The parameter estimates obtained in the first stage are utilized in the derivation of instrumental variables for the aggregate export and import prices being employed in the upper model. In the second stage, the net output supply equations derived from the GNP function are used in the estimation of the price elasticities of the first-level variables: exports, imports, domestic sales and labor. With these estimates in hand, we can come up with various elasticities of both the net output supply functions and the individual components of exports and imports. At the aggregate level (first-level), exports appear to be substitutable with domestic sales, while labor is complementary with imports. An increase in the price of exports would reduce the amount of the domestic sales supply, and a decrease in the wage rate would boost the demand for imports. On the other hand, labor and imports are complementary with exports and domestic sales in the input-output structure. At the disaggregate level (second-level), the price elasticities of the export and import components obtained indicate that both substitution and complement possibilities exist between them. Although these elasticities are interesting in their own right, they would be more usefully applied as inputs to the computational general equilibrium model.
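
For orientation, the derived-supply step in this abstract rests on Hotelling's lemma: net output supplies are price derivatives of the GNP function, so price elasticities follow from its second derivatives. In sketch notation (p: net output prices, v: fixed inputs; this is not the paper's exact SGM parameterization):

```latex
% Net output supply from the GNP function G(p, v) by Hotelling's lemma,
% and the implied net output price elasticities.
x_i(p, v) = \frac{\partial G(p, v)}{\partial p_i}, \qquad
\varepsilon_{ij} = \frac{\partial \ln x_i}{\partial \ln p_j}
             = \frac{p_j}{x_i} \, \frac{\partial^2 G(p, v)}{\partial p_i \, \partial p_j}
```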

The Optimal Configuration of Arch Structures Using Force Approximate Method (부재력(部材力) 근사해법(近似解法)을 이용(利用)한 아치구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(研究))

  • Lee, Gyu Won;Ro, Min Lae
    • KSCE Journal of Civil and Environmental Engineering Research / v.13 no.2 / pp.95-109 / 1993
  • In this study, the optimal configuration of arch structures is investigated using a decomposition technique, with the aim of providing a method for optimizing the shapes of both two-hinged and fixed arches. The optimal-configuration problem includes interaction formulas and working stress and buckling stress constraints, on the assumption that arch ribs can be approximated by a finite number of straight members. On the first level, buckling loads are calculated from the relation between the stiffness matrix and the geometric stiffness matrix using the Rayleigh-Ritz method, and the number of structural analyses is reduced by approximating member forces through sensitivity analysis in the design space. The objective function is the total weight of the structure, and the constraints cover working stress, buckling stress, and side limits. On the second level, the nodal point coordinates of the arch are used as design variables and the weight is again taken as the objective function. Treating the nodal coordinates as design variables reduces the optimization to an unconstrained optimal design problem, which is easier to solve. Numerical comparisons for several arch structures with various shapes and constraints show that the convergence rate is very fast regardless of constraint type and arch configuration, and the optimal configurations obtained here are nearly identical to results reported elsewhere. The total weight could be decreased by 17.7%-91.7% when an optimal configuration was achieved.
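
The first-level buckling computation reduces to a generalized eigenvalue problem. The sketch below assumes the stiffness matrix K and geometric stiffness matrix Kg have already been assembled; the toy matrices are illustrative, not an arch model:

```python
# Linearized buckling sketch: the critical load factor is the smallest
# positive eigenvalue lam of the symmetric generalized eigenproblem
# K @ phi = lam * Kg @ phi.
import numpy as np
from scipy.linalg import eigh

def critical_load_factor(K, Kg):
    eigvals = eigh(K, Kg, eigvals_only=True)  # solves K v = lam Kg v
    positive = eigvals[eigvals > 0]
    return positive.min() if positive.size else None

# Toy 2-DOF example (values are illustrative only):
K  = np.array([[4.0, -1.0], [-1.0, 3.0]])
Kg = np.array([[1.0,  0.0], [ 0.0, 1.0]])
print(critical_load_factor(K, Kg))  # smallest positive eigenvalue, approx. 2.38
```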

Optimization of Hot Water Extraction Conditions of Wando Sea Tangle (Laminaria japonica) for Development of Natural Salt Enhancer (천연 염미증강제 개발을 위한 완도산 다시마의 열수 추출 조건 최적화 및 염미증강 효능 평가)

  • Kim, Hyo Ju;Yang, Eun Ju
    • Journal of the Korean Society of Food Science and Nutrition / v.44 no.5 / pp.767-774 / 2015
  • In recent decades, health concerns related to sodium intake have increased demand for salt-reduced or sodium-reduced foods. Umami substances can enhance taste sensitivity to NaCl and may offer a unique approach to replacing and reducing the sodium content of foods. In this study, hot water extraction conditions yielding strong umami taste were investigated for Wando sea tangle. Sea tangle harvested in June was selected for hot water extraction based on its free amino acid composition. The quality of the extract was investigated at extraction temperatures of 60°C, 80°C, and 100°C and extraction times of 1, 2, and 3 h. Extracts prepared at 100°C contained the most soluble solids (35.47%~36.93%) and crude protein (3.75%~4.00%), and extract viscosity decreased with increasing extraction temperature. Umami amino acids (glutamic acid and aspartic acid) and sensory characteristics were best for extraction at 100°C for 2 h. The saltiness enhancement of the extract powder was then determined: adding 1% sea tangle extract powder to NaCl solutions enhanced saltiness intensity 1.84~4.25-fold, and at the same saltiness intensity, the sodium content of NaCl solutions with 1% extract powder was 12.24~24.33% lower than that of plain NaCl solutions. These results suggest that sea tangle extract can be used as a natural salt enhancer to reduce sodium in foods without lowering overall taste intensity.

Integrated Rotary Genetic Analysis Microsystem for Influenza A Virus Detection

  • Jung, Jae Hwan;Park, Byung Hyun;Choi, Seok Jin;Seo, Tae Seok
    • Proceedings of the Korean Vacuum Society Conference / 2013.08a / pp.88-89 / 2013
  • A variety of influenza A viruses from animal hosts are continuously prevalent throughout the world, causing human epidemics that result in millions of infections and enormous industrial and economic damage. Early diagnosis of such pathogens is therefore of paramount importance for biomedical examination and public healthcare screening. To address this issue, we propose a fully integrated rotary genetic analysis system, the Rotary Genetic Analyzer, for high-speed on-site detection of influenza A viruses. The Rotary Genetic Analyzer consists of four parts: a disposable microchip, a servo motor for precise, high-rate spinning of the chip, thermal blocks for temperature control, and a miniaturized optical fluorescence detector, as shown in Fig. 1. Each thermal block, made from duralumin, integrates a film heater at the bottom and a resistance temperature detector (RTD) in the middle. For efficient RT-PCR, three thermal blocks are placed on the rotary stage, with temperatures corresponding to the thermal cycling steps of 95°C (denaturation), 58°C (annealing), and 72°C (extension). Rotary RT-PCR was performed to amplify the target gene, monitored by the optical fluorescence detector above the extension block. The disposable microdevice (10 cm diameter) consists of a solid-phase-extraction-based sample pretreatment unit, a bead chamber, and a 4 μL PCR chamber, as shown in Fig. 2. The microchip is fabricated from a patterned polycarbonate (PC) sheet of 1 mm thickness and a PC film of 130 μm thickness, thermally bonded at 138°C using acetone vapour. Silica-treated glass microbeads of 150~212 μm diameter are introduced into the sample pretreatment chambers and held in place by a weir structure to construct the solid-phase extraction system. Fig. 3 shows strobed images of the sequential loading of three solutions. The three solutions were loaded into their reservoirs simultaneously (Fig. 3A); the influenza A H3N2 viral RNA sample was then loaded at 5000 RPM for 10 sec (Fig. 3B), followed by washing buffer at 5000 RPM for 5 min (Fig. 3C), after which the angular frequency was decreased to 100 RPM for siphon priming of the PCR cocktail into the channel (Fig. 3D). Finally, the PCR cocktail was loaded into the bead chamber at 2000 RPM for 10 sec, and the speed was then increased to 5000 RPM for 1 min to recover as much of the PCR cocktail containing the RNA template as possible (Fig. 3E). In this system, the waste from the RNA sample and washing buffer is transported to a waste chamber that is exactly filled by precise volume optimization, so that the PCR cocktail can then be transported to the PCR chamber. Fig. 3F shows the final image of the sample pretreatment: the PCR cocktail containing the RNA template is successfully isolated from the waste. To detect influenza A H3N2 virus, the purified RNA captured on the microdevice was amplified with the PCR cocktail in the PCR chamber. Fluorescence images at cycles 0 and 40 are shown in Fig. 4A; the fluorescence signal at cycle 40 increased drastically, confirming detection of the influenza A H3N2 virus. Real-time amplification profiles were successfully obtained using the optical fluorescence detector, as shown in Fig. 4B. Rotary PCR and off-chip PCR were compared using the same amount of influenza A H3N2 virus; the Ct value of the Rotary PCR was smaller than that of the off-chip PCR, and no contamination was observed. The whole process of sample pretreatment and RT-PCR could be accomplished in 30 min on the fully integrated Rotary Genetic Analyzer. We have thus demonstrated a fully integrated, portable Rotary Genetic Analyzer for influenza A virus detection with 'sample-in-answer-out' capability, comprising sample pretreatment, rotary amplification, and optical detection, with target gene amplification monitored in real time on the integrated system.
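
For orientation, the spin sequence reported above can be written as a simple protocol table; spin() is a hypothetical motor-driver callback, not part of the authors' system, and the siphon-priming duration is an assumption since the abstract does not give one:

```python
# The sample-pretreatment spin sequence from the abstract, as a protocol
# table of (step, RPM, seconds). Values are transcribed from the text
# except where noted.
PRETREATMENT_PROTOCOL = [
    ("load influenza viral RNA sample", 5000, 10),
    ("wash buffer through bead bed",    5000, 5 * 60),
    ("siphon-prime PCR cocktail",        100, 30),   # duration assumed
    ("load PCR cocktail to bead bed",   2000, 10),
    ("recover cocktail with template",  5000, 60),
]

def run_pretreatment(spin):
    """spin(rpm, seconds): hypothetical call that drives the servo motor."""
    for step, rpm, seconds in PRETREATMENT_PROTOCOL:
        print(f"{step}: {rpm} RPM for {seconds} s")
        spin(rpm, seconds)
```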

A Development of Automatic Lineament Extraction Algorithm from Landsat TM images for Geological Applications (지질학적 활용을 위한 Landsat TM 자료의 자동화된 선구조 추출 알고리즘의 개발)

  • 원중선;김상완;민경덕;이영훈
    • Korean Journal of Remote Sensing / v.14 no.2 / pp.175-195 / 1998
    • 1998
  • Automatic lineament extraction algorithms had been developed by various researches for geological purpose using remotely sensed data. However, most of them are designed for a certain topographic model, for instance rugged mountainous region or flat basin. Most of common topographic characteristic in Korea is a mountainous region along with alluvial plain, and consequently it is difficult to apply previous algorithms directly to this area. A new algorithm of automatic lineament extraction from remotely sensed images is developed in this study specifically for geological applications. An algorithm, named as DSTA(Dynamic Segment Tracing Algorithm), is developed to produce binary image composed of linear component and non-linear component. The proposed algorithm effectively reduces the look direction bias associated with sun's azimuth angle and the noise in the low contrast region by utilizing a dynamic sub window. This algorithm can successfully accomodate lineaments in the alluvial plain as well as mountainous region. Two additional algorithms for estimating the individual lineament vector, named as ALEHHT(Automatic Lineament Extraction by Hierarchical Hough Transform) and ALEGHT(Automatic Lineament Extraction by Generalized Hough Transform) which are merging operation steps through the Hierarchical Hough transform and Generalized Hough transform respectively, are also developed to generate geological lineaments. The merging operation proposed in this study is consisted of three parameters: the angle between two lines($\delta$$\beta$), the perpendicular distance($(d_ij)$), and the distance between midpoints of lines(dn). The test result of the developed algorithm using Landsat TM image demonstrates that lineaments in alluvial plain as well as in rugged mountain is extremely well extracted. Even the lineaments parallel to sun's azimuth angle are also well detected by this approach. Further study is, however, required to accommodate the effect of quantization interval(droh) parameter in ALEGHT for optimization.