• Title/Summary/Keyword: optimal approximation

Search results: 476

Measurements and Statistical Modeling of Ignition Noise from Vehicle (자동차 점화계통에서 발생된 전자파 잡음의 측정 및 통계적 모형)

  • 김종호;윤현보;백락준;우종우
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.8 no.4
    • /
    • pp.390-402
    • /
    • 1997
  • Ignition noise from a vehicle is measured, and the measurement data are statistically processed for modeling. A low-noise amplifier and a band-pass filter are added between the receiver and the three-axis antenna for low-noise-level measurement, and the APD and PSD are measured in the 800 MHz frequency range. The measured APD curves can be expressed, through a sensitivity study of each model, by varying 3 (Class A) or 6 (Class B) parameters, and these optimal parameters can easily be calculated using the Composite Approximation Algorithm. The selected APD parameters can be used to build a database of EM environments and also to determine the output and sensitivity margins of transmitters and receivers. Digital microwave transmission systems are equipped with equalizers against multipath fading. In this paper, we propose a variable reference tap position equalizer that varies the reference tap according to the fading type to achieve better performance. Simulation results show a performance improvement of about 4~5 dB in the MP condition and 2~3 dB in the NMP condition.

  • PDF
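
As a point of reference for the APD measurements described above, the sketch below shows how an amplitude probability distribution curve can be computed from envelope samples of an impulsive noise record. The synthetic noise model, threshold grid, and function name are illustrative assumptions; this is not the paper's Composite Approximation Algorithm or its Class A/B parameter fit.

```python
import numpy as np

def amplitude_probability_distribution(envelope, thresholds):
    """APD: probability that the noise envelope exceeds each threshold."""
    envelope = np.asarray(envelope)
    return np.array([(envelope > t).mean() for t in thresholds])

# Synthetic impulsive noise: Gaussian background plus sparse large impulses
# (a stand-in for measured ignition-noise envelope samples).
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, 100_000)
impulses = rng.normal(0.0, 20.0, 100_000) * (rng.random(100_000) < 0.01)
envelope = np.abs(background + impulses)

thresholds = np.logspace(-1, 2, 50)          # threshold grid (linear units)
apd = amplitude_probability_distribution(envelope, thresholds)

for t, p in list(zip(thresholds, apd))[::10]:
    print(f"P(envelope > {t:8.3f}) = {p:.4f}")
```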

Self-Organizing Polynomial Neural Networks Based on Genetically Optimized Multi-Layer Perceptron Architecture

  • Park, Ho-Sung;Park, Byoung-Jun;Kim, Hyun-Ki;Oh, Sung-Kwun
    • International Journal of Control, Automation, and Systems
    • /
    • v.2 no.4
    • /
    • pp.423-434
    • /
    • 2004
  • In this paper, we introduce a new topology of Self-Organizing Polynomial Neural Networks (SOPNN) based on a genetically optimized Multi-Layer Perceptron (MLP) and discuss its comprehensive design methodology involving mechanisms of genetic optimization. Let us recall that the design of the 'conventional' SOPNN uses the extended Group Method of Data Handling (GMDH) technique to exploit polynomials as well as to consider a fixed number of input nodes at the polynomial neurons (or nodes) located in each layer. However, this design process does not guarantee that the conventional SOPNN generated through learning results in an optimal network architecture. The design procedure applied in the construction of each layer of the SOPNN deals with its structural optimization, involving the selection of preferred nodes (PNs) with specific local characteristics (such as the number of input variables, the order of the polynomials, and the input variables themselves), and addresses specific aspects of parametric optimization. An aggregate performance index with a weighting factor is proposed in order to achieve a sound balance between the approximation and generalization (predictive) abilities of the model. To evaluate the performance of the GA-based SOPNN, the model is tested using pH neutralization process data as well as sewage treatment process data. A comparative analysis indicates that the proposed SOPNN achieves higher accuracy and superior predictive capability compared with other intelligent models presented previously.
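
As background to the record above, a GMDH-style polynomial neuron fits a low-order polynomial of a small number of inputs by least squares; the SOPNN stacks layers of such neurons and, in this paper, selects their inputs and orders by genetic optimization. Below is a minimal sketch of a single second-order, two-input polynomial node; the class name and the quadratic form are illustrative assumptions, not the authors' genetically optimized design.

```python
import numpy as np

class PolynomialNeuron:
    """Second-order two-input polynomial node fitted by least squares (GMDH-style)."""

    def fit(self, x1, x2, y):
        # Design matrix for y ~ a0 + a1*x1 + a2*x2 + a3*x1^2 + a4*x2^2 + a5*x1*x2
        X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        self.coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return self

    def predict(self, x1, x2):
        X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        return X @ self.coef

# Toy usage: recover a known quadratic relationship from noisy samples.
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
y = 1.0 + 2.0 * x1 - 0.5 * x2**2 + 0.3 * x1 * x2 + rng.normal(0, 0.01, 500)
pn = PolynomialNeuron().fit(x1, x2, y)
print("fitted coefficients:", np.round(pn.coef, 3))
```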

Robust Recurrent Wavelet Interval Type-2 Fuzzy-Neural-Network Control for DSP-Based PMSM Servo Drive Systems

  • El-Sousy, Fayez F.M.
    • Journal of Power Electronics
    • /
    • v.13 no.1
    • /
    • pp.139-160
    • /
    • 2013
  • In this paper, an intelligent robust control system (IRCS) for precision tracking control of permanent-magnet synchronous motor (PMSM) servo drives is proposed. The IRCS comprises a recurrent wavelet-based interval type-2 fuzzy-neural-network controller (RWIT2FNNC), an RWIT2FNN estimator (RWIT2FNNE) and a compensated controller. The RWIT2FNNC combines the merits of a self-constructing interval type-2 fuzzy logic system, a recurrent neural network and a wavelet neural network. Moreover, it performs structure and parameter learning concurrently. The RWIT2FNNC is used as the main tracking controller to mimic the ideal control law (ICL), while the RWIT2FNNE is developed to approximate an unknown dynamic function that includes the lumped parameter uncertainty. Furthermore, the compensated controller is designed to achieve $L_2$ tracking performance with a desired attenuation level and to deal with uncertainties including approximation errors, optimal parameter vectors and higher-order terms in the Taylor series. The adaptive learning algorithms for the compensated controller and the RWIT2FNNE are derived using the Lyapunov stability theorem to train the parameters of the RWIT2FNNE online. A computer simulation and an experimental system are developed to validate the effectiveness of the proposed IRCS. All of the control algorithms are implemented on a TMS320C31 DSP-based control computer. The simulation and experimental results confirm that the IRCS achieves robust performance and a precise response regardless of load disturbances and PMSM parameter uncertainties.
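
One building block assumed by the interval type-2 fuzzy system mentioned above is a membership function whose grade is an interval rather than a single number. The sketch below shows a standard Gaussian membership function with an uncertain mean, returning lower and upper grades; parameter names and values are illustrative and are not taken from the paper.

```python
import numpy as np

def it2_gaussian_mf(x, m1, m2, sigma):
    """Interval type-2 Gaussian membership function with uncertain mean in [m1, m2].

    Returns (lower, upper) membership grades for input x.
    """
    g1 = np.exp(-0.5 * ((x - m1) / sigma) ** 2)
    g2 = np.exp(-0.5 * ((x - m2) / sigma) ** 2)
    upper = np.where(x < m1, g1, np.where(x > m2, g2, 1.0))  # top of the footprint of uncertainty
    lower = np.minimum(g1, g2)                               # bottom of the footprint of uncertainty
    return lower, upper

x = np.linspace(-3, 3, 7)
lo, up = it2_gaussian_mf(x, m1=-0.5, m2=0.5, sigma=1.0)
for xi, l, u in zip(x, lo, up):
    print(f"x={xi:5.2f}  membership interval = [{l:.3f}, {u:.3f}]")
```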

A Novel Bit Allocation Method Using Two-phase Optimization Technique (2단계 최적화 방법을 이용한 비트할당 기법)

  • 김욱중;김성대
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.23 no.8
    • /
    • pp.2032-2041
    • /
    • 1998
  • In this work, we propose a novel bit allocation method that minimizes the overall distortion subject to a bit rate constraint. We partition the original bit allocation problem into 'macroblock-level bit allocation' problems that can be solved by conventional Lagrangian multiplier methods and a 'frame-level bit allocation' problem. To tackle the frame-level problem, a 'two-phase optimization' algorithm is used with an inter-frame dependency model. While existing approaches can hardly obtain a macroblock-level result for moving picture coding systems due to their high computational complexity, the proposed algorithm drastically reduces the computational load through problem partitioning and obtains a result close to the optimal solution. Because the optimally allocated results can be used as a benchmark for bit allocation methods, an upper performance limit, or a basis for developing approximation methods, we expect the proposed algorithm to be very useful for bit-allocation-related work.

  • PDF
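
The macroblock-level subproblems mentioned above are handled with the conventional Lagrange multiplier method: for a fixed multiplier each block picks the rate-distortion point minimizing D + λR, and the multiplier is adjusted until the rate budget is met. The sketch below illustrates that inner mechanism with bisection on the multiplier; the random rate-distortion tables and function names are assumptions for illustration, not the paper's two-phase frame-level algorithm.

```python
import numpy as np

def allocate_bits(rates, dists, rate_budget, iters=50):
    """Lagrangian bit allocation: each block chooses the R-D point minimizing D + lambda*R,
    with lambda found by bisection so the total rate meets the budget.

    rates, dists: arrays of shape (num_blocks, num_choices).
    """
    def choose(lam):
        idx = np.argmin(dists + lam * rates, axis=1)
        rows = np.arange(rates.shape[0])
        return idx, rates[rows, idx].sum(), dists[rows, idx].sum()

    lo, hi = 0.0, 1e6                      # bracket for the multiplier
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        idx, total_rate, _ = choose(lam)
        if total_rate > rate_budget:
            lo = lam                       # too many bits: increase lambda
        else:
            hi = lam                       # under budget: decrease lambda
    return choose(hi)

# Toy example: 4 blocks, 5 quantizer choices each (rate in bits, distortion arbitrary).
rng = np.random.default_rng(2)
rates = np.sort(rng.integers(1, 64, size=(4, 5)), axis=1)
dists = np.sort(rng.uniform(1, 100, size=(4, 5)), axis=1)[:, ::-1]  # more bits -> less distortion
idx, total_rate, total_dist = allocate_bits(rates, dists, rate_budget=120)
print("chosen quantizers:", idx, "| total rate:", total_rate, "| total distortion:", round(total_dist, 2))
```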

Understanding of 3D Human Body Motion based on Mono-Vision (단일 비전 기반 인체의 3차원 운동 해석)

  • Han, Young-Mo
    • The KIPS Transactions: Part B
    • /
    • v.18B no.4
    • /
    • pp.193-200
    • /
    • 2011
  • This paper proposes a low-cost visual analysis algorithm for human body motion, intended for real-time applications such as human-computer interfacing, virtual reality applications in medicine, and telemonitoring of patients. To reduce the cost of its use, we design the algorithm to use a single camera. To make the proposed system more convenient to use, we avoid using optical markers. To make the proposed algorithm suitable for real-time applications, we design it to have a closed form with high accuracy. To obtain a closed-form algorithm, we propose formulating the motion of a human body joint as a 2D universal joint model instead of the common 3D spherical joint model, without any kind of approximation. To give the closed-form algorithm high accuracy, we formulate the estimation process as an optimization problem. The algorithm thus designed is applied to each joint of the human body one after another. Through experiments we show that human body motion capture can be performed in an efficient and robust manner using our algorithm.
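
To illustrate the closed-form flavour of the 2-DOF (universal-joint) formulation mentioned above, the sketch below recovers two joint angles from a single orthographic observation of a limb endpoint of known length. The rotation convention, projection model, and function names are assumptions made for this example and are not the paper's actual formulation.

```python
import numpy as np

def project_limb(alpha, beta, L):
    """Orthographic image displacement of a limb endpoint driven by a 2-DOF
    (universal-joint) rotation: R_y(beta) @ R_x(alpha) applied to the rest
    direction (0, 0, 1), scaled by the limb length L."""
    u = L * np.sin(beta) * np.cos(alpha)
    v = -L * np.sin(alpha)
    return u, v

def estimate_joint_angles(u, v, L):
    """Closed-form recovery of the two joint angles from one image observation
    (assumes the limb points toward the camera, i.e. cos(beta) >= 0)."""
    alpha = -np.arcsin(np.clip(v / L, -1.0, 1.0))
    beta = np.arcsin(np.clip(u / (L * np.cos(alpha)), -1.0, 1.0))
    return alpha, beta

# Round-trip check with an arbitrary pose.
true_alpha, true_beta, L = np.radians(25.0), np.radians(-40.0), 0.35
u, v = project_limb(true_alpha, true_beta, L)
est_alpha, est_beta = estimate_joint_angles(u, v, L)
print("true angles (deg):", np.degrees([true_alpha, true_beta]))
print("estimated   (deg):", np.degrees([est_alpha, est_beta]))
```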

Bit Split Algorithm for Applying the Multilevel Modulation of Iterative codes (반복부호의 멀티레벨 변조방식 적용을 위한 비트분리 알고리즘)

  • Park, Tae-Doo;Kim, Min-Hyuk;Kim, Nam-Soo;Jung, Ji-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.9
    • /
    • pp.1654-1665
    • /
    • 2008
  • This paper presents bit splitting methods for applying multilevel modulation to iterative codes such as turbo codes, low-density parity-check codes, and turbo product codes. The log-likelihood ratio (LLR) method splits multilevel symbols into soft-decision bit values using the received in-phase and quadrature components based on a Gaussian approximation. However, it is too complicated to calculate and to implement in hardware because of the exponential and logarithm calculations. Therefore, this paper presents Euclidean, MAX, sector, and center-focusing methods to reduce the high complexity of the LLR method. This paper also proposes an optimal soft-symbol split method for the three kinds of iterative codes. Furthermore, a 16-APSK modulation method with a double-ring structure for the DVB-S2 system and a 16-QAM modulation method with a lattice structure for the T-DMB system are also analyzed.
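
For context on the bit-split methods above, the sketch below contrasts the exact LLR computation (a log-sum-exp over all constellation points) with the max-log (MAX) simplification for a Gray-mapped 16-QAM symbol; the mapping table, noise variance, and sample symbol are illustrative assumptions, not the paper's sector or center-focusing methods.

```python
import numpy as np
from itertools import product

GRAY_PAM = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): 1.0, (1, 0): 3.0}
# Gray-mapped 16-QAM: bits (b0, b1) select the in-phase level, (b2, b3) the quadrature level.
CONSTELLATION = {bits: GRAY_PAM[bits[:2]] + 1j * GRAY_PAM[bits[2:]]
                 for bits in product((0, 1), repeat=4)}

def exact_llrs(r, n0):
    """Exact bit LLRs via log-sum-exp over the full constellation (Gaussian noise)."""
    metrics = {bits: -abs(r - s) ** 2 / n0 for bits, s in CONSTELLATION.items()}
    logsum = lambda ms: np.log(np.sum(np.exp(ms)))
    llrs = []
    for k in range(4):
        m0 = [m for bits, m in metrics.items() if bits[k] == 0]
        m1 = [m for bits, m in metrics.items() if bits[k] == 1]
        llrs.append(logsum(m0) - logsum(m1))
    return np.array(llrs)

def maxlog_llrs(r, n0):
    """Max-log approximation: keep only the closest point for each bit hypothesis."""
    d2 = {bits: abs(r - s) ** 2 for bits, s in CONSTELLATION.items()}
    return np.array([(min(d for bits, d in d2.items() if bits[k] == 1)
                      - min(d for bits, d in d2.items() if bits[k] == 0)) / n0
                     for k in range(4)])

r = 2.3 - 0.7j          # a noisy received symbol
print("exact  :", np.round(exact_llrs(r, n0=1.0), 3))
print("max-log:", np.round(maxlog_llrs(r, n0=1.0), 3))
```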

Development of Independent Target Approximation by Auto-computation of 3-D Distribution Units for Stereotactic Radiosurgery (정위적 방사선 수술시 3차원적 공간상 단위분포들의 자동계산법에 의한 간접적 병소 근사화 방법의 개발)

  • Choi Kyoung Sik;Oh Seung Jong;Lee Jeong Woo;Kim Jeung Kee;Suh Tae Suk;Choe Bo Young;Kim Moon Chan;Chung Hyun-Tai
    • Progress in Medical Physics
    • /
    • v.16 no.1
    • /
    • pp.24-31
    • /
    • 2005
  • Stereotactic radiosurgery (SRS) describes a method of delivering a high dose of radiation to a small target volume in the brain, generally in a single fraction, while the dose delivered to the surrounding normal tissue should be minimized. To perform automatic planning of SRS, a new method for multi-isocenter/shot linear accelerator (linac) and gamma knife (GK) radiosurgery treatment planning was developed, based on a physical lattice structure in the target. An optimal radiosurgical plan is determined by many beam parameters in linear accelerator or gamma knife-based radiation therapy. In this work, an isocenter/shot was modeled as a sphere whose size equals the circular collimator/helmet hole size, because the dimension of the 50% isodose level in the dose profile is similar to this size. In a computer-aided system, the method first performs an automatic arrangement of multi-isocenters/shots, considering two parameters for each isocenter/shot: position and collimator/helmet size. Simultaneously, an irregularly shaped target is approximated by cubic structures through the computation of voxel units. The treatment planning method was evaluated in terms of dose distribution using dose-volume histograms, dose conformity, and dose homogeneity with respect to the targets. For irregularly shaped targets, the new method performed optimal multi-isocenter packing, and it took only a few seconds in a computer-aided system. The targets were covered by at least the 50% isodose curve. The dose conformity was generally at acceptable levels, and the dose homogeneity was always less than 2.0, satisfying the Radiation Therapy Oncology Group (RTOG) SRS criteria for the various targets. In conclusion, this approach based on a physical lattice structure could provide a useful radiosurgical plan without restrictions on tumor shape and across different modalities such as linac and GK for SRS.

  • PDF
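
To illustrate the idea of approximating an irregular target by voxel units and packing sphere-shaped shots into it, the sketch below greedily places the largest collimator sphere that still fits at the deepest uncovered voxel, using SciPy's Euclidean distance transform. The grid size, sphere radii, and placement rule are illustrative assumptions, not the paper's automatic arrangement algorithm or its dose evaluation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def greedy_sphere_packing(target, radii_mm, voxel_mm=1.0, max_shots=50):
    """Cover a voxelized target mask with spheres ('shots'), largest feasible sphere first."""
    covered = np.zeros_like(target, dtype=bool)
    # Distance (in mm) from each target voxel to the nearest voxel outside the target:
    # a sphere of radius r fits inside the target wherever this distance is at least r.
    depth = distance_transform_edt(target) * voxel_mm
    grid = np.stack(np.meshgrid(*[np.arange(n) for n in target.shape], indexing="ij"), axis=-1)
    shots = []
    for _ in range(max_shots):
        remaining = target & ~covered
        if not remaining.any():
            break
        # Pick the uncovered voxel lying deepest inside the target as the next isocenter.
        center = np.unravel_index(np.argmax(np.where(remaining, depth, -1.0)), target.shape)
        feasible = [r for r in radii_mm if r <= depth[center]]
        if not feasible:
            break                                  # no collimator small enough fits here
        r = max(feasible)
        ball = np.linalg.norm((grid - center) * voxel_mm, axis=-1) <= r
        covered |= ball
        shots.append((center, r))
    return shots, covered

# Toy target: an ellipsoidal mask on a small grid (1 mm voxels).
n = 32
z, y, x = np.ogrid[:n, :n, :n]
target = ((z - 16) / 10.0) ** 2 + ((y - 16) / 7.0) ** 2 + ((x - 16) / 5.0) ** 2 <= 1.0
shots, covered = greedy_sphere_packing(target, radii_mm=[2, 4, 7, 9])
print(f"{len(shots)} shots cover {covered[target].mean():.0%} of the target voxels")
```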

Control of pH Neutralization Process using Simulation Based Dynamic Programming in Simulation and Experiment (ICCAS 2004)

  • Kim, Dong-Kyu;Lee, Kwang-Soon;Yang, Dae-Ryook
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회 학술대회논문집)
    • /
    • 2004.08a
    • /
    • pp.620-626
    • /
    • 2004
  • For general nonlinear processes, it is difficult to apply linear model-based control methods, so nonlinear controls are considered. Among the numerous approaches suggested, the most rigorous is to use dynamic optimization. Many general engineering problems such as control, scheduling, and planning are expressed as functional optimization problems, and most of them can be cast as dynamic programming (DP) problems. However, DP is used in only a few cases because, as the size of the problem grows, the dynamic programming approach suffers from a computational burden known as the 'curse of dimensionality'. To avoid this problem, the Neuro-Dynamic Programming (NDP) approach was proposed by Bertsekas and Tsitsiklis (1996). To solve seriously nonlinear process control problems, interest in the NDP approach has grown, and NDP algorithms have been applied to diverse areas such as retailing, finance, inventory management, and communication networks, and have been extended to chemical engineering. In the NDP approach, we select the optimal control input policy to minimize a cost value calculated as the sum of the current stage cost and the cost of future stages starting from the next state. The cost value is related to a weighted squared sum of the error and the input movement. During the calculation of the optimal input policy, if an approximate cost function built from simulation data is used with Bellman iteration, the computational burden can be relieved and the curse of dimensionality of DP can be overcome. How to construct a cost-to-go function with good approximation performance is a very important issue. The neural network is an eager learning method that works as a global approximator of the cost-to-go function. In this algorithm, training the neural network is the important and difficult part, and it has a significant effect on control performance. To avoid the difficulty of neural network training, a lazy learning method such as the k-nearest neighbor method can be exploited. Training is unnecessary for this method, but it requires more computation time and greater data storage. The pH neutralization process has long been taken as a representative benchmark problem in nonlinear chemical process control due to its nonlinearity and time-varying nature. In this study, the NDP algorithm was applied to the pH neutralization process. First, pH neutralization process control using the NDP algorithm was performed through simulations with various approximators. Both global and local approximators were used for the NDP calculation. After that, NDP was verified on the real system through a pH neutralization experiment. The control results of the NDP algorithm were compared with those of the traditionally used PI controller, in both simulations and experiments. From this comparison, control by the NDP algorithm showed faster and better performance than the PI controller. In addition, control by the NDP algorithm gave good results when applied to cases with disturbances and multiple set-point changes.

  • PDF
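
The core of the NDP scheme above, approximating the cost-to-go from simulated data and improving it by Bellman iteration, can be illustrated on a toy problem. The sketch below runs fitted value iteration with a lazy k-nearest-neighbour approximator on an assumed linear scalar plant with a quadratic stage cost; the dynamics, weights, and discount factor are illustrative, not the pH neutralization model.

```python
import numpy as np

# Toy scalar plant x_{k+1} = 0.9*x_k + 0.5*u_k with stage cost x^2 + 0.1*u^2.
A, B, Q, R, GAMMA = 0.9, 0.5, 1.0, 0.1, 0.95
ACTIONS = np.linspace(-1.0, 1.0, 11)

def step(x, u):
    return A * x + B * u

def stage_cost(x, u):
    return Q * x**2 + R * u**2

class KNNValue:
    """Lazy (k-nearest-neighbour) approximator for the cost-to-go J(x)."""
    def __init__(self, states, values, k=5):
        self.states, self.values, self.k = states, values, k
    def __call__(self, x):
        idx = np.argsort(np.abs(self.states - x))[: self.k]
        return self.values[idx].mean()

# Simulation data: states sampled over the operating range, cost-to-go initialized to zero.
states = np.linspace(-2.0, 2.0, 101)
values = np.zeros_like(states)

for _ in range(60):                       # Bellman iterations on the sampled states
    J = KNNValue(states, values, k=5)
    values = np.array([min(stage_cost(x, u) + GAMMA * J(step(x, u)) for u in ACTIONS)
                       for x in states])

J = KNNValue(states, values)
x = 1.5
for t in range(5):                        # greedy policy w.r.t. the learned cost-to-go
    u = min(ACTIONS, key=lambda u: stage_cost(x, u) + GAMMA * J(step(x, u)))
    print(f"t={t}  x={x:+.3f}  u={u:+.2f}")
    x = step(x, u)
```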

Optimization of the Truss Structures Using Member Stress Approximate method (응력근사해법(應力近似解法)을 이용한 평면(平面)트러스구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(研究))

  • Lee, Gyu Won;You, Hee Jung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.13 no.2
    • /
    • pp.73-84
    • /
    • 1993
  • In this research, configuration design optimization of plane truss structures is tested using a decomposition technique. In the first level, the problem of converting the nonlinear programming problem into a linear programming problem is solved effectively, and the number of structural analyses required for sensitivity analysis is reduced by developing the stress constraints into a member stress approximation following the design space approach, which has proven efficient for sensitivity analysis. The weight function is adopted as the cost function in order to minimize the structure's weight. As design constraints, allowable stress, buckling stress, displacement constraints under multiple loading conditions, and upper and lower bounds on the design variables are considered. In the second level, the nodal point coordinates of the truss structure are used as coordinating variables, and the objective function is again taken as the weight function. By treating the nodal point coordinates as design variables, the resulting unconstrained optimal design problems are easy to solve. The decomposition method, which optimizes the section areas in the first level and the configuration variables in the second level, was applied to plane truss structures. Numerical comparisons with results obtained from tests on several truss structures with various shapes and design criteria show that the convergence rate is very fast regardless of the constraint types and the configuration of the truss structures. The optimal configurations of the truss structures obtained in this study are almost identical to those reported elsewhere. The total weight could be decreased by 5.4%-15.4% when the optimal configuration was achieved, though there is some variation between cases.

  • PDF
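
The first level of the decomposition above replaces the implicit stress constraints with an explicit member stress approximation so that the sizing problem can be treated by linear programming with few reanalyses. A common form of such an approximation (a sketch of the general technique, not necessarily the exact expression used in the paper) is a first-order expansion of each member stress in the reciprocal sizing variables:

$$
\sigma_i(\mathbf{A}) \;\approx\; \sigma_i(\mathbf{A}^{0}) \;+\; \sum_{j} \left.\frac{\partial \sigma_i}{\partial x_j}\right|_{\mathbf{A}^{0}} \left(x_j - x_j^{0}\right), \qquad x_j = \frac{1}{A_j}.
$$

This expansion is exact for statically determinate trusses, where $\sigma_i = F_i/A_i$ with member forces $F_i$ independent of the areas, and it makes the constraints $|\sigma_i| \le \sigma_i^{\text{allow}}$ linear in the variables $x_j$, so the minimum-weight sizing step can be posed as a sequence of linear programs.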

Simulation of eccentricity effects on short- and long-normal logging measurements using a Fourier-hp-finite-element method (Self-adaptive hp 유한요소법을 이용한 단.장노말 전기검층에서 손데의 편향 효과 수치모델링)

  • Nam, Myung-Jin;Pardo, David;Torres-Verdin, Carlos;Hwang, Se-Ho;Park, Kwon-Gyu;Lee, Chang-Hyun
    • Geophysics and Geophysical Exploration
    • /
    • v.13 no.1
    • /
    • pp.118-127
    • /
    • 2010
  • Resistivity logging instruments are designed to measure the electrical resistivity of a formation, and this can be directly interpreted to provide a water-saturation profile. However, resistivity logs are sensitive to borehole and shoulder-bed effects, which often result in misinterpretation of the results. These effects are emphasised more in the presence of tool eccentricity. For precise interpretation of short- and long-normal logging measurements in the presence of tool eccentricity, we simulate and analyse eccentricity effects by combining the use of a Fourier series expansion in a new system of coordinates with a 2D goal-oriented high-order self-adaptive hp finite-element refinement strategy, where h denotes the element size and p the polynomial order of approximation within each element. The algorithm automatically performs local mesh refinement to construct an optimal grid for the problem under consideration. In addition, the proper combination of h and p refinements produces highly accurate simulations even in the presence of high electrical resistivity contrasts. Numerical results demonstrate that our algorithm provides highly accurate and reliable simulation results. Eccentricity effects are more noticeable when the borehole is large or resistive, or when the formation is highly conductive.
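
The simulation above relies on a goal-oriented, self-adaptive hp refinement loop that repeatedly solves, estimates the error, and refines where the error is large. The sketch below shows only the h-refinement skeleton (solve, estimate, mark, refine) applied to a 1D profile with a sharp resistivity-like contrast; the error indicator, the profile, and all names are assumptions for illustration, and the paper's hp strategy additionally chooses the polynomial order p per element.

```python
import numpy as np

def adaptive_h_refinement(f, a=0.0, b=1.0, tol=1e-3, max_iter=30):
    """Skeleton of a solve-estimate-mark-refine loop: adaptively refine a 1D mesh
    until the elementwise (midpoint) interpolation error of f is below tol.
    A full hp strategy would also decide, per marked element, between splitting
    the element (h) and raising its polynomial order (p); only h is shown here."""
    nodes = np.linspace(a, b, 5)                       # coarse initial mesh
    for _ in range(max_iter):
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        # Error indicator per element: |f(midpoint) - linear interpolant(midpoint)|.
        eta = np.abs(f(mids) - 0.5 * (f(nodes[:-1]) + f(nodes[1:])))
        marked = eta > tol                             # mark elements with large error
        if not marked.any():
            break
        nodes = np.sort(np.concatenate([nodes, mids[marked]]))  # bisect marked elements
    return nodes, eta

# A resistivity-like profile with a sharp contrast (illustrative only).
profile = lambda z: 1.0 + 99.0 / (1.0 + np.exp(-200.0 * (z - 0.6)))
nodes, eta = adaptive_h_refinement(profile)
widths = np.diff(nodes)
print(f"{len(nodes) - 1} elements; max indicator = {eta.max():.2e}")
print(f"smallest element {widths.min():.5f} near the contrast, largest {widths.max():.3f}")
```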