• Title/Summary/Keyword: Optimization Technique

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on chart patterns rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized far less than users need. In recent years, many studies have examined stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge numbers of charts and find patterns that can predict stock prices. Although short-term price forecasting performance has improved, long-term forecasting power remains limited, so such methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier techniques could not recognize, but this can be fragile in practice because whether the patterns found are actually suitable for trading is a separate question. Those studies find a meaningful pattern, locate points that match it, and then measure performance n days later, assuming a purchase was made at that point; because this approach computes hypothetical returns, it can diverge considerably from reality. Whereas existing research tries to discover patterns with price predictive power, this study proposes to define the patterns first and to trade when a pattern with a high probability of success appears. The M & W wave patterns published by Merrill (1980) are simple in that each can be distinguished by five turning points. Although some of these patterns have been reported to have price predictability, no performance in the actual market has been reported. The simplicity of a five-turning-point pattern has the advantage of reducing the cost of improving pattern recognition accuracy. In this study, the 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the single pattern with the highest success rate in each group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation reflects real conditions because performance is measured assuming that both the buy and the sell were actually executed. We tested three ways to calculate the turning points. The first, the minimum change rate zig-zag method, removes price movements below a certain percentage and then locates the vertices. In the second, the high-low line zig-zag method, a high price that meets the n-day high line is taken as a peak, and a low price that meets the n-day low line is taken as a valley. In the third, the swing wave method, a central high price that is higher than the n high prices to its left and right is taken as a peak, and a central low price that is lower than the n low prices to its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests, which we interpret to mean that trading after a pattern is confirmed complete is more effective than trading while the pattern is still forming.
Because the number of cases in this simulation was far too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also ran the simulation with walk-forward analysis (WFA), which separates the test period from the application period, so that the system could respond appropriately to market changes. We optimize at the level of the stock portfolio because optimizing the variables for each individual stock carries a risk of over-optimization; we therefore set the number of constituent stocks to 20 to gain the benefit of diversification while avoiding over-fitting. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful and the high-volatility portfolio was second best, which indicates that some price volatility is needed for patterns to take shape, but that the highest volatility is not necessarily best.
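Below is a minimal sketch of the swing wave turning-point rule described in this abstract: a bar is a peak if its high exceeds the n highs on both sides, and a valley if its low is below the n lows on both sides. The function name, window size, and return format are illustrative assumptions, not taken from the paper.

```python
from typing import List, Tuple

def swing_wave_turning_points(highs: List[float], lows: List[float],
                              n: int = 5) -> List[Tuple[int, str, float]]:
    """Return (index, kind, price) for swing-wave peaks and valleys."""
    points = []
    for i in range(n, len(highs) - n):
        left_h, right_h = highs[i - n:i], highs[i + 1:i + n + 1]
        left_l, right_l = lows[i - n:i], lows[i + 1:i + n + 1]
        if highs[i] > max(left_h) and highs[i] > max(right_h):
            points.append((i, "peak", highs[i]))    # central high above the n highs on both sides
        elif lows[i] < min(left_l) and lows[i] < min(right_l):
            points.append((i, "valley", lows[i]))   # central low below the n lows on both sides
    return points
```

A run of five consecutive turning points then gives one M- or W-shaped candidate; only the pattern group whose historical success rate (found with the GA and walk-forward analysis) is highest would be traded.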

HW/SW Partitioning Techniques for Multi-Mode Multi-Task Embedded Applications (멀티모드 멀티태스크 임베디드 어플리케이션을 위한 HW/SW 분할 기법)

  • Kim, Young-Jun;Kim, Tae-Whan
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.337-347
    • /
    • 2007
  • An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality. It is further called a multi-mode multi-task embedded system if it additionally supports multiple tasks to be executed within a mode. In this paper, we address a HW/SW partitioning problem, namely, HW/SW partitioning of multi-mode multi-task embedded applications with timing constraints on tasks. The objective of the optimization problem is to find a minimum total system cost for the allocation/mapping of processing resources to the functional modules in tasks, together with a schedule that satisfies the timing constraints. Success in solving the problem depends strongly on how fully the potential parallelism among module executions is exploited. However, because the search space of this parallelism is inherently very large, and to keep schedulability analysis tractable, prior HW/SW partitioning methods have not been able to fully exploit the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems of the partitioning problem simultaneously: (1) allocation of processing resources, (2) mapping of the processing resources to the modules in tasks, and (3) determination of an execution schedule of the modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise-refinement partitioning technique for single-mode multi-task applications. The proposed technique is then extended to solve the HW/SW partitioning problem of multi-mode multi-task applications. Experiments with a set of real-life applications show that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single-mode and multi-mode multi-task applications, respectively, compared with the conventional method.
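This is not the paper's stepwise-refinement algorithm, but a simplified greedy sketch of the cost/deadline trade-off it optimizes: modules are moved from software to hardware, paying extra implementation cost, until the task meets its deadline. The module fields and the serial schedule are simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    sw_time: float   # execution time when run on the processor
    hw_time: float   # execution time when implemented as a HW block
    hw_cost: float   # additional cost of the HW implementation
    in_hw: bool = False

def task_time(modules):
    # Serial schedule for simplicity; the paper instead exploits parallelism
    # among module executions when checking schedulability.
    return sum(m.hw_time if m.in_hw else m.sw_time for m in modules)

def greedy_partition(modules, deadline):
    total_hw_cost = 0.0
    while task_time(modules) > deadline:
        candidates = [m for m in modules if not m.in_hw and m.sw_time > m.hw_time]
        if not candidates:
            raise ValueError("deadline unreachable even with full HW mapping")
        # move the module with the best time saving per unit of HW cost
        best = max(candidates, key=lambda m: (m.sw_time - m.hw_time) / m.hw_cost)
        best.in_hw = True
        total_hw_cost += best.hw_cost
    return [m.name for m in modules if m.in_hw], total_hw_cost
```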

Optimization of fractionation efficiency (FE) and throughput (TP) in a large scale splitter less full-feed depletion SPLITT fractionation (Large scale FFD-SF) (대용량 splitter less full-feed depletion SPLITT 분획법 (Large scale FFD-SF)에서의 분획효율(FE)및 시료처리량(TP)의 최적화)

  • Eum, Chul Hun;Noh, Ahrahm;Choi, Jaeyeong;Yoo, Yeongsuk;Kim, Woon Jung;Lee, Seungho
    • Analytical Science and Technology
    • /
    • v.28 no.6
    • /
    • pp.453-459
    • /
    • 2015
  • Split-flow thin cell fractionation (SPLITT fractionation, SF) is a particle separation technique that allows continuous (and thus preparative-scale) separation into two subpopulations based on particle size or density. SF has two basic performance parameters. One is the throughput (TP), defined as the amount of sample that can be processed per unit time. The other is the fractionation efficiency (FE), defined as the number % of particles that have the size predicted by theory. The full-feed depletion mode (FFD-SF) has only one inlet for the sample feed, and the channel is equipped with a flow stream splitter only at the outlet. In the conventional FFD mode it is difficult to enlarge the channel because of the splitter inside it, so we used a large-scale splitter-less FFD-SF channel to increase TP by increasing the channel scale. In this study, an FFD-SF channel with no flow stream splitters ('splitter-less') was developed for large-scale fractionation and then tested for optimum TP and FE by varying the sample concentration and the flow rates at the inlet and outlet of the channel. Polyurethane (PU) latex beads with two different size distributions (about 3~7 µm and about 2~30 µm) were used for the test. The sample concentration was varied from 0.2 to 0.8% (wt/vol), and the channel flow rate was set to 70, 100, 120, or 160 mL/min. The fractionated particles were monitored by optical microscopy (OM), and the sample recovery was determined by collecting the particles on a 0.1 µm membrane filter. Accumulation of relatively large micron-sized particles in the channel could be prevented by feeding carrier liquid. It was found that, to achieve an effective TP, the sample concentration should be higher than 0.4%.
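As a small illustration of the two performance measures defined above, the snippet below computes TP as feed concentration times feed flow rate and FE as the number % of particles whose measured size matches the theoretical prediction; both formulas and the input values are illustrative assumptions, not the study's data.

```python
def throughput_mg_per_min(conc_wt_vol_percent: float, flow_mL_per_min: float) -> float:
    # 0.4 %(wt/vol) = 0.4 g per 100 mL = 4 mg/mL of feed
    return conc_wt_vol_percent * 10.0 * flow_mL_per_min        # mg of sample per minute

def fractionation_efficiency(n_predicted_size: int, n_total: int) -> float:
    return 100.0 * n_predicted_size / n_total                  # number %

print(throughput_mg_per_min(0.4, 160))        # 640.0 mg/min at 0.4 % and 160 mL/min
print(fractionation_efficiency(930, 1000))    # 93.0 % if 930 of 1000 counted particles match theory
```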

Optimization of Growth Medium and Poly-β-hydroxybutyric Acid Production from Methanol in Methylobacterium organophilum (메탄올로부터 Methylobacterium organophilum에 의한 Poly-β-hydroxybutyric Acid의 생산과 배지성분의 최적화)

  • Choi, Joon-H;Kim, Jung H.;M. Daniel;J.M. Lebeault
    • Microbiology and Biotechnology Letters
    • /
    • v.17 no.4
    • /
    • pp.392-396
    • /
    • 1989
  • Methylobacterium organophilum, a facultative methylotroph, was cultivated on methanol as the sole carbon and energy source. Cell growth was affected by the various components of the minimal synthetic medium, and the medium composition was optimized with 0.5% (v/v) methanol at pH 6.8 and 30°C. The maximum specific growth rate of M. organophilum reached 0.26 hr⁻¹ in the optimized medium, which had the following composition: methanol, 0.5% (v/v); (NH4)2SO4, 1.0 g/l; KH2PO4, 2.13 g/l; KH2PO4, 1.305 g/l; MgSO4·7H2O, 0.45 g/l; and trace elements (CaCl2·2H2O, 3.3 mg; FeSO4·7H2O, 1.3 mg; MnSO4·4H2O, 130 µg; ZnSO4·5H2O, 40 µg; Na2MoO4·2H2O, 40 µg; CoCl2·6H2O, 40 µg; H3BO3, 30 µg per liter). Cell growth was significantly repressed by nitrogen limitation and by deficiency of Mn²⁺ or Fe²⁺. Methanol itself strongly repressed cell growth, and complete inhibition was observed at concentrations above 4% (v/v). To overcome methanol inhibition and to prevent methanol limitation, methanol was fed intermittently using a D.O.-stat technique. PHB production by M. organophilum was stimulated by deficiency of nutrients such as NH4⁺, SO4²⁻, Mg²⁺, K⁺, or PO4³⁻ in the medium. The maximum PHB content, 58% of dry cell weight, was obtained under potassium deficiency in the optimized synthetic medium.

Process Optimization of Dextran Production by Leuconostoc sp. strain YSK Isolated from Fermented Kimchi (김치로부터 분리된 Leuconostoc sp. strain YSK 균주에 의한 덱스트란 생산 조건의 최적화)

  • Hwang, Seung-Kyun;Hong, Jun-Taek;Jung, Kyung-Hwan;Chang, Byung-Chul;Hwang, Kyung-Suk;Shin, Jung-Hee; Yim, Sung-Paal;Yoo, Sun-Kyun
    • Journal of Life Science
    • /
    • v.18 no.10
    • /
    • pp.1377-1383
    • /
    • 2008
  • A bacterium producing non- or partially digestible dextran was isolated from kimchi broth by an enrichment culture technique and tentatively identified as Leuconostoc sp. strain YSK. We used response surface methodology (a Box-Behnken design) to optimize the principal parameters, culture pH, temperature, and yeast extract concentration, for maximizing dextran production. The parameter ranges were chosen on the basis of prior screening work in our laboratory as pH 5.5, 6.5, and 7.5; temperature 25, 30, and 35°C; and yeast extract 1, 5, and 9 g/l. The initial sucrose concentration was 100 g/l. The mineral medium consisted of 3.0 g KH2PO4, 0.01 g FeSO4·H2O, 0.01 g MnSO4·4H2O, 0.2 g MgSO4·7H2O, 0.01 g NaCl, and 0.05 g CaCO3 per liter of deionized water. The optimum conditions were around pH 7.0, 27 to 28°C, and 6 to 7 g/l yeast extract. The best dextran yield was 60% (dextran per g sucrose), and the best dextran productivity was 0.8 g/l·h.
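A hedged sketch of the response-surface step follows: a second-order model is fitted to the Box-Behnken runs in coded factor levels (pH, temperature, yeast extract) and its maximum is located on a grid. The response values are random placeholders, not the study's measurements.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    cols = [np.ones(len(X))]
    cols += [X[:, j] for j in range(X.shape[1])]                                 # linear terms
    cols += [X[:, j] ** 2 for j in range(X.shape[1])]                            # squared terms
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]   # interactions
    return np.column_stack(cols)

# Box-Behnken design for three factors in coded levels (-1, 0, +1):
# pH (5.5/6.5/7.5), temperature (25/30/35 C), yeast extract (1/5/9 g/l)
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.random.default_rng(0).normal(50, 5, len(X))        # placeholder dextran yields

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)

# Evaluate the fitted surface on a coded grid and report where it peaks.
grid = np.array([[a, b, c] for a in np.linspace(-1, 1, 21)
                            for b in np.linspace(-1, 1, 21)
                            for c in np.linspace(-1, 1, 21)])
pred = quadratic_design_matrix(grid) @ beta
print("coded optimum (pH, temperature, yeast extract):", grid[np.argmax(pred)])
```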

Computer Simulations of Hoffman Brain Phantom: Sensitivity Measurements and Optimization of Data Analysis of 〔Tc-99m〕ECD SPECT Before and After Acetazolamide Administration (Acetazolamide 사용전후 〔Tc-99m〕 ECD SPECT 데이타 분석 방법의 최적화 및 민감도 측정)

  • Kim, Hee-Joung;Lee, Hee-Kyung
    • Progress in Medical Physics
    • /
    • v.6 no.2
    • /
    • pp.71-81
    • /
    • 1995
  • Consecutive brain 〔Tc-99m〕ECD SPECT studies before and after acetazolamide (Diamox) administration have been performed in patients for the evaluation of cerebrovascular hemodynamic reserve. However, the quantitative potential of SPECT Diamox imaging is limited by degrading factors such as finite detector resolution, attenuation, scatter, poor counting statistics, and the method of data analysis. Physical measurements in phantoms filled with known amounts of radioactivity can help characterize and potentially quantify the sensitivities, but it is often very difficult to make a realistic phantom that simulates patients in clinical situations. We therefore studied the sensitivity of ECD SPECT before and after Diamox administration by computer simulation. The sensitivity is defined as (ΔN/N)/(ΔS/S) × 100%, where ΔN is the difference in mean counts between post- and pre-Diamox in the measured data, N is the mean counts before Diamox in the measured data, ΔS is the difference in mean counts between post- and pre-Diamox in the model, and S is the mean counts before Diamox in the model. In clinical Diamox studies, the percentage change in radioactivity can be determined, as a measure of the change in radioactivity concentration caused by Diamox, by subtracting the pre- from the post-Diamox data. However, the optimal amount of subtraction for 100% sensitivity is not known, since this requires a thorough sensitivity analysis by computer simulation. For the consecutive brain SPECT imaging model before and after Diamox, when a 30% increase in radioactivity concentration was assigned as the Diamox effect in the model, the sensitivities were measured as 51.03, 73.4, 94.00, and 130.74% for 0, 100, 150, and 200% subtraction, respectively. The sensitivity analysis indicated that partial volume effects due to finite detector resolution, together with statistical noise, cause a significant underestimation of the radioactivity measurements, and that the amount of underestimation depends on the % increase in radioactivity concentration and the % subtraction of pre- from post-Diamox data. A 150% subtraction appears to be optimal in clinical situations where approximately 30% changes in radioactivity concentration are expected. Computer simulation may be a powerful technique for studying the sensitivity of ECD SPECT before and after Diamox administration.
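The definition above translates directly into code; the helper below is a plain transcription of (ΔN/N)/(ΔS/S) × 100%, with illustrative count values only.

```python
def sensitivity_percent(n_pre: float, n_post: float,
                        s_pre: float, s_post: float) -> float:
    """Sensitivity = (dN/N) / (dS/S) * 100 %, measured vs. model mean counts."""
    measured_change = (n_post - n_pre) / n_pre   # dN / N from the reconstructed images
    model_change = (s_post - s_pre) / s_pre      # dS / S assigned in the simulation model
    return 100.0 * measured_change / model_change

# e.g. a 30 % increase assigned in the model that appears as only ~15.3 % in the
# measured counts gives a sensitivity of about 51 %.
print(sensitivity_percent(100.0, 115.3, 100.0, 130.0))
```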

Study on an Effective Decellularization Technique for Cardiac Valve, Arterial Wall and Pericardium Xenografts: Optimization of Decellularization (이종 심장 판막 및 대혈관 이식편과 심낭에서 효과적인 탈세포화 방법에 관한 연구: 탈세포화의 최적화)

  • Park, Chun-Soo;Kim, Yong-Jin;Sung, Si-Chan;Park, Ji-Eun;Choi, Sun-Young;Kim, Woong-Han;Kim, Kyung-Hwan
    • Journal of Chest Surgery
    • /
    • v.41 no.5
    • /
    • pp.550-562
    • /
    • 2008
  • Background: We attempted to reproduce a previously reported method that is known to be effective for decellularization, and we sought the optimal conditions for decellularization by introducing some modifications to this method. Material and Method: Porcine semilunar valves, arterial walls, and pericardium were processed for decellularization with various combinations and concentrations of decellularizing agents under different conditions of temperature, osmolarity, and incubation time. The degree of decellularization and the preservation of the extracellular matrix were evaluated by staining with hematoxylin and eosin, and with alpha-Gal and DAPI in some of the decellularized tissues. Result: Decellularization was achieved in the specimens treated with sodium deoxycholate, sodium dodecyl sulfate, Triton X-100, and sodium dodecyl sulfate with Triton X-100 as single-step methods, and in the specimens treated with hypotonic solution → Triton X-100 → sodium dodecyl sulfate, sodium deoxycholate → hypotonic solution → sodium dodecyl sulfate, and hypotonic solution → sodium dodecyl sulfate as multi-step methods. Conclusion: Considering the number and amount of the chemicals used, the incubation time, and the degree of damage to the extracellular matrix, a single-step method with sodium dodecyl sulfate and Triton X-100 and a multi-step method with hypotonic solution followed by sodium dodecyl sulfate were both relatively optimal methods for decellularization in this study.

Identification of the Environmentally Problematic Input/Environmental Emissions and Selection of the Optimum End-of-pipe Treatment Technologies of the Cement Manufacturing Process (시멘트 제조공정의 환경적 취약 투입물/환경오염물 파악 및 최적종말처리 공정 선정)

  • Lee, Joo-Young;Kim, Yoon-Ha;Lee, Kun-Mo
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.39 no.8
    • /
    • pp.449-455
    • /
    • 2017
  • Process input data, including material and energy, and process output data, including product, co-product, and environmental emissions, of the reference and target processes were collected and analyzed to evaluate process performance. Environmentally problematic inputs/environmental emissions of the manufacturing processes were identified from these data. The significant process inputs contributing to each of the environmental emissions were identified using multiple regression analysis between the process inputs and the environmental emissions. The optimum combination of end-of-pipe technologies for treating the environmental emissions, considering economic aspects, was determined using linear programming. Cement manufacturing processes in Korea and the EU producing the same type of cement were chosen for the case study. Environmentally problematic inputs/environmental emissions of the domestic cement manufacturing processes include coal, dust, and SOx. Multiple regression analysis between the process inputs and environmental emissions revealed that CO2 emission was influenced most by coal, followed by the input raw materials and gypsum; SOx emission was influenced by coal, and dust emission by gypsum followed by raw material. Optimization of the end-of-pipe technologies treating dust showed that a combination of 100% of the electrostatic precipitator and 2.4% of the fiber filter gives the lowest cost, while for SOx a combination of 100% of the dry addition process and 25.88% of the wet scrubber gives the lowest cost. A salient feature of this research is that it proposes a method for identifying the environmentally problematic inputs/environmental emissions of manufacturing processes, in particular the cement manufacturing process; another is that it shows a method for selecting the optimum combination of end-of-pipe treatment technologies.
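A hedged sketch of the linear-programming step follows: choose utilization fractions of two dust-treatment devices that meet a required removal load at minimum cost. The costs, removal efficiencies, and required removal are placeholders, not the paper's data, but the structure mirrors the "combination of treatment technologies" result described above.

```python
from scipy.optimize import linprog

cost = [5.0, 8.0]           # cost per unit utilization: electrostatic precipitator, fiber filter
removal = [0.95, 0.999]     # fraction of the dust load each device removes at full utilization
required_removal = 0.974    # overall removal fraction the emission limit demands

# minimize  cost . x   subject to  removal . x >= required_removal,  0 <= x <= 1
res = linprog(c=cost,
              A_ub=[[-removal[0], -removal[1]]],
              b_ub=[-required_removal],
              bounds=[(0, 1), (0, 1)],
              method="highs")
print(res.x)   # utilization fractions, analogous to "100 % ESP + 2.4 % fiber filter"
```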

The Optimal Configuration of Arch Structures Using Force Approximate Method (부재력(部材力) 근사해법(近似解法)을 이용(利用)한 아치구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(研究))

  • Lee, Gyu Won;Ro, Min Lae
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.13 no.2
    • /
    • pp.95-109
    • /
    • 1993
  • In this study, the optimal configuration of arch structures was investigated using a decomposition technique. The objective is to provide a method for optimizing the shapes of both two-hinged and fixed arches. The optimal configuration problem for arch structures includes interaction formulas and working stress and buckling stress constraints, under the assumption that the arch rib can be approximated by a finite number of straight members. On the first level, buckling loads are calculated from the relation between the stiffness matrix and the geometric stiffness matrix using the Rayleigh-Ritz method, and the number of structural analyses is decreased by approximating the member forces through sensitivity analysis using the design space approach. The objective function is formulated as the total weight of the structure, and the constraints are derived from the working stress, the buckling stress, and the side limits. On the second level, the nodal point coordinates of the arch structure are used as design variables, and the objective function is again taken as the weight. By treating the nodal point coordinates as design variables, the optimization can be reduced to an unconstrained optimal design problem, which is easier to solve. Numerical comparisons with results obtained from tests on several arch structures with various shapes and constraints show that the convergence rate is very fast regardless of constraint type and arch configuration, and that the optimal configurations obtained in this study are nearly identical to those reported elsewhere. The total weight could be decreased by 17.7% to 91.7% when the optimal configuration was achieved.
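Below is a minimal sketch of the first-level buckling computation described above: with the elastic stiffness matrix K and the geometric stiffness matrix Kg assembled for a reference load, the critical load factor is the smallest eigenvalue of K v = λ(-Kg) v. The 2×2 matrices are placeholders standing in for the Rayleigh-Ritz reduced matrices, not values from the paper.

```python
import numpy as np
from scipy.linalg import eigh

K = np.array([[12.0, -6.0],
              [-6.0,  8.0]])        # elastic stiffness matrix (placeholder)
Kg = np.array([[-0.20, 0.05],
               [ 0.05, -0.15]])     # geometric stiffness matrix at the reference load (placeholder)

# Linear buckling: (K + lambda * Kg) v = 0  =>  K v = lambda * (-Kg) v
eigvals = eigh(K, -Kg, eigvals_only=True)
print("critical load factor:", eigvals.min())
```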

A Study on the Forecasting Model on Market Share of a Retail Facility -Focusing on Extension of Interaction Model- (유통시설의 시장점유율 예측 모델에 관한 연구 -상호작용 모델의 확장을 중심으로)

  • 최민성
    • Journal of Distribution Research
    • /
    • v.5 no.2
    • /
    • pp.49-68
    • /
    • 2001
  • This study summarizes the results on optimal location selection and presents its limitations and directions for further research. To reach this objective, the study selected and tested the interaction model, which obtains the coordinates of a location through an optimization technique. The study first used the original variables in the model, but the results differed from reality. To overcome this difference, the study performed a market survey and identified new variables (primary data such as price, quality, and assortment of goods, and secondary data such as gross floor area, shop area, and the number of cars the parking lot can hold). The study then determined the optimal variable by an empirical analysis comparing the actual market share in 1988 with the market share produced by the model. However, the market share obtained for each variable did not reflect reality because of the assumed λ value in the model. To improve this, a sensitivity analysis was performed in which λ was increased incrementally from 1.0 to 2.9. The analysis indicated the best agreement with the 1998 market share ratio at λ = 1.0. Applying weights to variables from the primary and secondary data showed that more variables from the primary data coincided with the actual sales ranking. Although this study has limitations and room for improvement, a marketer using this extended model can obtain more meaningful results.
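A hedged sketch of a Huff-type interaction model consistent with the description above: a store's probability of capturing a demand zone rises with its attractiveness and falls with distance raised to the power λ, and λ is then swept from 1.0 to 2.9 as in the sensitivity analysis. All attractiveness, distance, and demand figures are illustrative, not the study's survey data.

```python
import numpy as np

attractiveness = np.array([8.0, 5.0, 6.5])      # composite of price, quality, assortment, floor area, etc.
distance = np.array([[1.2, 2.5, 3.0],           # distance from each demand zone (rows) to each store (cols)
                     [2.0, 1.1, 2.2],
                     [3.5, 2.8, 1.0]])
zone_demand = np.array([400.0, 250.0, 350.0])   # purchasing power of each zone

def market_share(lmbda: float) -> np.ndarray:
    utility = attractiveness / distance ** lmbda            # U_ij = A_j / d_ij^lambda
    prob = utility / utility.sum(axis=1, keepdims=True)     # choice probability per zone
    sales = prob.T @ zone_demand                            # expected sales per store
    return sales / sales.sum()

for lmbda in np.arange(1.0, 3.0, 0.1):                      # sensitivity analysis over lambda
    print(f"lambda={lmbda:.1f}  shares={np.round(market_share(lmbda), 3)}")
```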
