• Title/Summary/Keyword: Discretization Algorithm

Development of Advanced Numerical Techniques to Reduce Grid Dependency in Industrial CFD Applications

  • Blahowsky Hans Peter
    • Korean Society of Computational Fluids Engineering: Conference Proceedings
    • /
    • 1998.11a
    • /
    • pp.19-22
    • /
    • 1998
  • Automatic mesh generation procedures applied to industrial flow problems lead to complex mesh topologies where usually no special consideration is given to mesh resolution. In the present study, a fast and flexible solution algorithm combined with generalized higher-order discretization schemes is presented, and its application to intake port calculation is demonstrated.

Ripple Analysis and Control of Electric Multiple Unit Traction Drives under a Fluctuating DC Link Voltage

  • Diao, Li-Jun;Dong, Kan;Yin, Shao-Bo;Tang, Jing;Chen, Jie
    • Journal of Power Electronics
    • /
    • v.16 no.5
    • /
    • pp.1851-1860
    • /
    • 2016
  • The traction motors in electric multiple unit (EMU) trains are powered by AC-DC-AC converters, and the DC link voltage is generated by single-phase PWM converters with a fluctuation component at twice the frequency of the input catenary AC grid, which causes fluctuations in the motor torque and current. Traditionally, heavy and low-efficiency hardware LC resonant filters connected in parallel on the DC side are adopted to reduce the ripple effect. In this paper, an analytical model of the ripple phenomenon is derived and analyzed in the frequency domain, and a ripple control scheme that compensates the slip frequency of the rotor vector control system, without a hardware filter, is applied to reduce the torque and current ripple amplitude. A relatively simple discretization method is then chosen to discretize the algorithm with high discretization accuracy. Simulation and experimental results validate the proposed ripple control strategy.
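
The abstract says only that "a relatively simple discretization method" is used to convert the continuous-time ripple controller into discrete form, without naming it. As a hedged illustration of one common choice, the sketch below applies the Tustin (bilinear) transform to a generic PI regulator; the controller gains and sample time are assumptions, not values from the paper.

```python
# Hedged sketch: Tustin (bilinear) discretization of a continuous-time PI
# controller C(s) = kp + ki/s, one common "simple discretization method".
# All gains and the sample time are illustrative assumptions.

def discretize_pi_tustin(kp, ki, ts):
    """Coefficients of u[k] = u[k-1] + b0*e[k] + b1*e[k-1],
    obtained via the substitution s -> (2/ts)*(z-1)/(z+1)."""
    b0 = kp + ki * ts / 2.0
    b1 = -kp + ki * ts / 2.0
    return b0, b1

class DiscretePI:
    def __init__(self, kp, ki, ts):
        self.b0, self.b1 = discretize_pi_tustin(kp, ki, ts)
        self.e_prev = 0.0
        self.u_prev = 0.0

    def step(self, e):
        # One sample of the discrete controller.
        u = self.u_prev + self.b0 * e + self.b1 * self.e_prev
        self.e_prev, self.u_prev = e, u
        return u

# Example: a compensation loop sampled at an assumed 10 kHz rate.
ctrl = DiscretePI(kp=0.5, ki=200.0, ts=1e-4)
print(ctrl.step(1.0))
```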

Time-Discretization of Delayed Multi-Input Nonlinear System Using a New Algorithm

  • Qiang, Zhang;Zhang, Zheng;Kim, Sung-Jung;Chong, Kil-To
    • Proceedings of the KIEE Conference
    • /
    • 2007.04a
    • /
    • pp.89-91
    • /
    • 2007
  • In this paper, a new approach for a sampled-data representation of a nonlinear system with time-delayed multiple inputs is proposed. The approach is largely devoid of ill-conditioning and is suitable for any nonlinear problem. The new scheme is applied to nonlinear systems with two or three inputs, and then the general equation for delayed multiple inputs is derived. The method is based on the matrix exponential theory. It does not require excessive computational resources and lends itself to a short and robust piece of software that can be easily inserted into large simulation packages. The performance of the proposed method is evaluated using a nonlinear system with time delay: maneuvering an automobile.
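
The matrix-exponential machinery the abstract builds on can be illustrated on the linear special case of a single input delay shorter than the sampling period. The sketch below shows only that simplified linear case (the paper's contribution is the extension to nonlinear, multi-input delayed systems); the system matrices, delay, and sampling period are assumed values.

```python
# Hedged sketch: exact zero-order-hold discretization of a linear system with
# an input delay d < T, computed from matrix exponentials (Van Loan block trick).
import numpy as np
from scipy.linalg import expm

def zoh_delay_discretize(A, B, T, d):
    """x' = A x + B u(t - d), 0 <= d < T, piecewise-constant u.
    Returns (Ad, B0, B1) with x[k+1] = Ad x[k] + B0 u[k] + B1 u[k-1]."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B

    def blocks(t):                        # [e^{At}, int_0^t e^{As} ds B]
        E = expm(M * t)
        return E[:n, :n], E[:n, n:]

    Ad, G_T = blocks(T)                   # integral over [0, T]
    _, G_Td = blocks(T - d)               # integral over [0, T-d]
    B0 = G_Td                             # weight of the current input u[k]
    B1 = G_T - G_Td                       # weight of the delayed input u[k-1]
    return Ad, B0, B1

# Toy example (assumed values): a damped oscillator sampled at T = 0.1 s.
A = np.array([[0.0, 1.0], [-4.0, -0.4]])
B = np.array([[0.0], [1.0]])
Ad, B0, B1 = zoh_delay_discretize(A, B, T=0.1, d=0.03)
print(Ad, B0, B1)
```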

NUMERICAL SOLUTIONS OF BURGERS EQUATION BY REDUCED-ORDER MODELING BASED ON PSEUDO-SPECTRAL COLLOCATION METHOD

  • SEO, JEONG-KWEON;SHIN, BYEONG-CHUN
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.19 no.2
    • /
    • pp.123-135
    • /
    • 2015
  • In this paper, a reduced-order modeling (ROM) of the Burgers equation is studied based on the pseudo-spectral collocation method. A ROM basis is obtained by the proper orthogonal decomposition (POD). The Crank-Nicolson scheme is applied for the time discretization, and the pseudo-spectral element collocation method is adopted to solve the linearized equations arising from the Newton method in the spatial discretization. We present the POD-based algorithm and report some numerical experiments to show the efficiency of the proposed method.
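
As a hedged illustration of the POD step mentioned in the abstract, the sketch below extracts a reduced basis from a snapshot matrix via the SVD; the snapshot data are random placeholders rather than Burgers-equation solutions, and the energy threshold is an assumption.

```python
# Hedged sketch: POD basis from a snapshot matrix, the standard SVD route to
# the reduced basis described in the abstract.  Snapshot generation (solving
# the Burgers equation) is omitted; the data below are placeholders.
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """snapshots: (n_space, n_time) array of solution snapshots.
    Returns the leading POD modes capturing `energy` of the variance."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]                      # columns = POD modes

X = np.random.rand(200, 50)              # placeholder snapshot matrix
Phi = pod_basis(X)
# Reduced coordinates of a full-order state u:  a = Phi.T @ u
```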

Polygonal finite element modeling of crack propagation via automatic adaptive mesh refinement

  • Shahrezaei, M.;Moslemi, H.
    • Structural Engineering and Mechanics
    • /
    • v.75 no.6
    • /
    • pp.685-699
    • /
    • 2020
  • Polygonal finite elements provide great flexibility in mesh generation for crack propagation problems, where the topology of the domain changes significantly. However, controlling the discretization error in such problems is a main concern. In this paper, a polygonal FEM is presented for modeling crack propagation problems via an automatic adaptive mesh refinement procedure. The adaptive mesh refinement is accomplished based on the Zienkiewicz-Zhu error estimator in conjunction with a weighted SPR technique. Adaptive mesh refinement is employed in some steps to reduce the discretization error, not to track the crack. In the steps where no adaptive mesh refinement is required, local modifications are applied to the mesh to prevent poor polygonal element shapes. Finally, several numerical examples are analyzed to demonstrate the efficiency, accuracy and robustness of the proposed computational algorithm in crack propagation problems.
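
To make the recovery-based refinement idea concrete, the sketch below runs a toy one-dimensional adapt loop in which each element's indicator is the gap between its raw gradient and a nodally averaged (recovered) gradient, loosely in the spirit of the Zienkiewicz-Zhu estimator the abstract cites. The interpolated field, marking fraction, and bisection refinement are illustrative assumptions, not the authors' polygonal-FEM procedure.

```python
# Hedged sketch: recovery-based adaptive refinement on a 1-D toy problem.
import numpy as np

def f(x):                                   # field with a sharp local gradient
    return np.tanh(40 * (x - 0.5))

def refine_adaptively(nodes, n_cycles=5, top_fraction=0.3):
    for _ in range(n_cycles):
        u = f(nodes)
        h = np.diff(nodes)
        grad = np.diff(u) / h                        # raw piecewise-constant gradient
        g_node = np.empty(len(nodes))                # recovered gradient at the nodes
        g_node[1:-1] = 0.5 * (grad[:-1] + grad[1:])  # average of neighbouring elements
        g_node[0], g_node[-1] = grad[0], grad[-1]
        # element indicator: mismatch between recovered and raw gradient
        eta = np.abs(0.5 * (g_node[:-1] + g_node[1:]) - grad) * h
        marked = eta >= (1.0 - top_fraction) * eta.max()   # mark the worst elements
        mids = 0.5 * (nodes[:-1] + nodes[1:])[marked]      # bisect marked elements
        nodes = np.sort(np.concatenate([nodes, mids]))
    return nodes

mesh = refine_adaptively(np.linspace(0.0, 1.0, 11))
print(len(mesh), "nodes after adaptive refinement")
```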

A PARALLEL FINITE ELEMENT ALGORITHM FOR SIMULATION OF THE GENERALIZED STOKES PROBLEM

  • Shang, Yueqiang
    • Bulletin of the Korean Mathematical Society
    • /
    • v.53 no.3
    • /
    • pp.853-874
    • /
    • 2016
  • Based on a particular overlapping domain decomposition technique, a parallel finite element discretization algorithm for the generalized Stokes equations is proposed and investigated. In this algorithm, each processor computes a local approximate solution in its own subdomain by solving a global problem on a mesh that is fine around its own subdomain and coarse elsewhere, and hence avoids communication with other processors in the process of computations. This algorithm has low communication complexity. It only requires the application of an existing sequential solver on the global meshes associated with each subdomain, and hence can reuse existing sequential software. Numerical results are given to demonstrate the effectiveness of the parallel algorithm.
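
A hedged structural skeleton of the algorithm described in the abstract is sketched below: each processor solves a global problem on a mesh that is fine near its own subdomain and coarse elsewhere, and only the restriction of that solution to its subdomain is kept. The mesh builder, solver, and restriction operator are placeholder callables standing in for an existing sequential finite element code.

```python
# Hedged structural skeleton; the callables are assumed to be supplied by an
# existing finite element package, mirroring the reuse the abstract mentions.
def local_and_parallel_solve(subdomains, build_locally_refined_mesh, solve, restrict):
    """Each loop iteration corresponds to one processor's work; no data are
    exchanged between iterations, mirroring the communication-free design."""
    local_pieces = []
    for omega_i in subdomains:
        mesh_i = build_locally_refined_mesh(omega_i)   # fine near omega_i, coarse elsewhere
        u_i = solve(mesh_i)                            # global problem, solved sequentially
        local_pieces.append(restrict(u_i, omega_i))    # keep only the part on omega_i
    return local_pieces                                # assembled into the global approximation
```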

ICAIM: An Improved CAIM Algorithm for Knowledge Discovery

  • Yaowapanee, Piriya;Pinngern, Ouen
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.2029-2032
    • /
    • 2004
  • The quantity of data has increased rapidly in recent years, leading to data overload and making it difficult to search for the required data, so methods for eliminating redundant data are needed. One efficient approach is Knowledge Discovery in Databases (KDD). In general, data fall into two cases: continuous data and discrete data. This paper describes an algorithm that transforms continuous attributes into discrete ones. We present an Improved Class-Attribute Interdependence Maximization (ICAIM) algorithm, designed to work with supervised data, for the discretization process. The algorithm does not require the user to predefine the number of intervals. ICAIM improves CAIM by using a significance test to determine which intervals should be merged into one. Our goal is to generate a minimal number of discrete intervals and to improve classification accuracy. We used the iris plant dataset (IRIS) to test this algorithm and compare it with the CAIM algorithm.
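
For reference, the quantity that both CAIM and the proposed ICAIM seek to maximize when placing interval boundaries is the CAIM criterion; a hedged sketch of its computation is given below. ICAIM's significance-test-based merging is not shown, and the data and cut points are illustrative.

```python
# Hedged sketch: the CAIM criterion for a candidate supervised discretization,
# CAIM = (1/n) * sum_r max_r^2 / M_r over the n intervals, where max_r is the
# count of the most frequent class in interval r and M_r its total count.
import numpy as np

def caim_criterion(values, labels, cuts):
    """values: 1-D continuous attribute; labels: integer class labels;
    cuts: sorted interior cut points defining the intervals."""
    edges = [-np.inf] + list(cuts) + [np.inf]
    n = len(edges) - 1
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_r = (values > lo) & (values <= hi)
        M_r = in_r.sum()
        if M_r == 0:
            continue
        max_r = np.max(np.bincount(labels[in_r]))   # most frequent class count
        total += max_r**2 / M_r
    return total / n

# Illustrative (assumed) data and cut points, not the IRIS experiment itself.
x = np.array([4.9, 5.1, 6.0, 6.3, 7.1, 7.4])
y = np.array([0, 0, 1, 1, 2, 2])
print(caim_criterion(x, y, cuts=[5.5, 6.7]))
```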

Searching Algorithm for Finite Element Analysis of 2-D Contact Problems (2차원 접촉문제의 유한요소 해석을 위한 탐색알고리즘)

  • 장동환;최호준;고병두;조승한;황병복
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.20 no.12
    • /
    • pp.148-158
    • /
    • 2003
  • In this paper, an efficient and accurate contact search algorithm is proposed for contact problems solved by the finite element method. A slave node and a master contact segment are defined using the side of a finite element on the contact surface. The specific goal is to develop techniques that reduce the nonsmoothness of the contact interactions arising from the finite element discretization of the contact surface. Contact detection is accomplished by monitoring the territory of the slave nodes throughout the calculation for possible penetration of a master surface. To establish the validity of the proposed algorithm, examples with different processes and geometries were simulated. Attention is focused on the error rate, based on the penetrated area, in simulations of large deformation with contact between deformable bodies. The simulation results show that the proposed algorithm improves contact detection.
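
The elementary test behind the described search, checking whether a slave node has penetrated a master segment, can be sketched as below for the 2-D case; the sign convention and segment orientation are assumptions, not the paper's implementation.

```python
# Hedged sketch: 2-D penetration check of a slave node against a master segment.
import numpy as np

def penetration(slave, seg_a, seg_b):
    """Signed normal distance of `slave` to the segment (seg_a, seg_b).
    Negative => the node lies on the inward (penetrating) side under the
    assumed orientation; None => the projection misses the segment."""
    t = seg_b - seg_a
    n = np.array([-t[1], t[0]]) / np.linalg.norm(t)   # unit normal (assumed outward)
    xi = np.dot(slave - seg_a, t) / np.dot(t, t)      # local coordinate along segment
    gap = np.dot(slave - seg_a, n)
    return gap if 0.0 <= xi <= 1.0 else None

print(penetration(np.array([0.5, -0.01]),
                  np.array([0.0, 0.0]),
                  np.array([1.0, 0.0])))              # small negative gap: penetration
```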

An algorithm for simulation of cyclic eccentrically-loaded RC columns using fixed rectangular finite elements discretization

  • Sadeghi, Kabir;Nouban, Fatemeh
    • Computers and Concrete
    • /
    • v.23 no.1
    • /
    • pp.25-36
    • /
    • 2019
  • In this paper, an algorithm is presented to numerically simulate reinforced concrete (RC) columns having any geometric form of section, loaded eccentrically along one or two axes. To apply the algorithm, the columns are discretized globally into two macro-elements (MEs), and the critical sections of the columns are discretized locally into fixed rectangular finite elements. A proposed triple simultaneous dichotomy convergence method is applied to find the equilibrium state in the critical section of the column, considering the three strains at three corners of the critical section as the main characteristic variables. Based on the proposed algorithm, a computer program has been developed for simulating the nonlinear behavior of eccentrically-loaded columns. Good agreement is observed between the results obtained with the proposed algorithm and the experimental test results. The simulated results indicate that the ultimate strength and stiffness of RC columns increase with increasing axial force, but large axial loads reduce the ductility of the column, make it brittle, cause a great loss of material, and lead to early failure.
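
A hedged, one-variable analogue of the dichotomy (bisection) search the abstract refers to is sketched below: it iterates on a single strain until the internal axial force of the section balances the applied force, whereas the paper's method iterates simultaneously on three corner strains. The section response used here is a toy placeholder.

```python
# Hedged sketch: one-variable dichotomy (bisection) search for sectional
# equilibrium; a simplified analogue of the triple simultaneous dichotomy.
def dichotomy_equilibrium(axial_force, internal_force,
                          eps_lo=-3e-3, eps_hi=3e-3, tol=1e-8):
    """Find the strain at which the (monotone) internal axial force balances
    the applied force, by repeated interval halving."""
    while eps_hi - eps_lo > tol:
        eps_mid = 0.5 * (eps_lo + eps_hi)
        if internal_force(eps_mid) < axial_force:    # section resists too little
            eps_lo = eps_mid
        else:
            eps_hi = eps_mid
    return 0.5 * (eps_lo + eps_hi)

# Usage with a toy (assumed) linear section response: N = EA * strain.
strain = dichotomy_equilibrium(500.0, lambda e: 2.0e6 * e)
print(strain)
```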

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, detection of market timing means determining when to buy and sell to get excess return from trading. In many market timing systems, trading rules have been used as an engine to generate signals for trade. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, by using its control function, it does not generate a trading signal when the pattern of the market is uncertain. Numeric data for rough set analysis should be discretized because the rough set approach only accepts categorical data. Discretization searches for proper "cuts" for numeric data that determine intervals; all values that lie within each interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals and examines the histogram of each variable, then determines cuts so that approximately the same number of samples falls into each of the intervals. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews with experts. Minimum entropy scaling implements an algorithm based on recursively partitioning the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization searches categorical values by naïve scaling of the data and then finds the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance using rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. The KOSPI 200 is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
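
As a hedged illustration of the simplest of the four methods described above, the sketch below implements equal frequency scaling: cuts are chosen so that roughly the same number of samples falls into each interval. The number of intervals and the synthetic indicator series are assumptions.

```python
# Hedged sketch: equal frequency scaling as described in the abstract.
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Return interior cut points giving intervals with ~equal sample counts."""
    qs = np.linspace(0, 1, n_intervals + 1)[1:-1]     # interior quantile levels
    return np.quantile(values, qs)

def discretize(values, cuts):
    """Map each value to the index of the interval it falls into."""
    return np.searchsorted(cuts, values, side="right")

x = np.random.randn(660)                              # e.g. 660 days of a technical indicator (assumed)
cuts = equal_frequency_cuts(x, n_intervals=4)
codes = discretize(x, cuts)
print(np.bincount(codes))                             # roughly equal counts per interval
```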