• Title/Summary/Keyword: Noisy optimization


MRAS Based Speed Estimator for Sensorless Vector Control of a Linear Induction Motor with Improved Adaptation Mechanisms

  • Holakooie, Mohammad Hosein;Taheri, Asghar;Sharifian, Mohammad Bagher Bannae
    • Journal of Power Electronics
    • /
    • v.15 no.5
    • /
    • pp.1274-1285
    • /
    • 2015
  • This paper deals with model reference adaptive system (MRAS) speed estimators based on the secondary flux for linear induction motors (LIMs). The operation of these estimators depends significantly on the adaptation mechanism. A fixed-gain PI controller is the most common adaptation mechanism, but it may fail to estimate the speed correctly under conditions such as variations in machine parameters and noisy environments. Two adaptation mechanisms are proposed to improve the performance of the LIM drive system, particularly at very low speed. The first adaptation mechanism is based on fuzzy theory, and the second is obtained from the LIM mechanical model. Compared with a conventional PI controller, the proposed adaptation mechanisms have low sensitivity to both variations of machine parameters and noise. The optimum parameters of the adaptation mechanisms are tuned offline through a chaotic optimization algorithm (COA), because no design criterion is available to provide these values. The efficiency of the MRAS speed estimator is validated by both numerical simulation and real-time hardware-in-the-loop (HIL) implementation. Results indicate that the proposed adaptation mechanisms improve the performance of the MRAS speed estimator.
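The fixed-gain PI adaptation mechanism that this abstract takes as its baseline can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the error signal `e` would normally come from the cross product of the reference- and adaptive-model flux estimates, and the gains `kp`, `ki` and time step `dt` are illustrative values.

```python
# Hedged sketch of a fixed-gain PI adaptation mechanism for an MRAS
# speed estimator (illustrative gains, not the paper's values). The
# tuning error e is supplied directly here; in a real MRAS it is derived
# from the flux estimates of the reference and adaptive models.

def pi_adaptation(errors, kp=0.5, ki=0.1, dt=0.001):
    """PI law: speed estimate = kp * e + ki * (integral of e)."""
    integral = 0.0
    trajectory = []
    for e in errors:
        integral += e * dt
        trajectory.append(kp * e + ki * integral)
    return trajectory

# With a persistent positive error, the integral term steadily raises
# the speed estimate until the MRAS loop drives the error to zero.
est = pi_adaptation([1.0] * 5)
```

A fuzzy adaptation mechanism, as proposed in the paper, would replace the fixed `kp` and `ki` with gains scheduled on the error and its rate of change.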

Optimal Optical Mouse Array for High Performance Mobile Robot Velocity Estimation (이동로봇 속도 추정 성능 향상을 위한 광 마우스의 최적 배열)

  • Kim, Sungbok;Kim, Hyunbin
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.6
    • /
    • pp.555-562
    • /
    • 2013
  • This paper presents the optimal array of optical mice for the accurate velocity estimation of a mobile robot. It is assumed that there can be some restriction on the installation of two or more optical mice at the bottom of a mobile robot. First, the velocity kinematics of a mobile robot with an array of optical mice is derived, which maps the velocity of the mobile robot to the velocities of the optical mice. Second, taking into account the consistency in physical units, the uncertainty ellipsoid is obtained to represent the error characteristics of the mobile robot velocity estimation owing to noisy optical mouse measurements. Third, a simple but effective performance index is defined as the inverse of the volume of the uncertainty ellipsoid, which can be used to optimize the optical mouse placement. Fourth, simulation results for the optimal placement of three optical mice within a given elliptical region are given.
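The performance index described in the third step, the inverse of the uncertainty-ellipsoid volume, can be sketched directly. A hedged example: the 3x3 covariance of the estimated velocity (vx, vy, w) is assumed given here, whereas in the paper it follows from the mouse placement and measurement noise.

```python
import math

# Hedged sketch of an inverse-ellipsoid-volume placement index. The
# covariance matrices below are illustrative, not derived from any
# actual optical mouse arrangement.

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def placement_index(cov):
    """Inverse volume of the 3D uncertainty ellipsoid. The semi-axes are
    the square roots of the covariance eigenvalues, so the volume is
    (4/3) * pi * sqrt(det(cov))."""
    return 1.0 / ((4.0 / 3.0) * math.pi * math.sqrt(det3(cov)))

# A tighter error ellipsoid (a better mouse placement) scores higher.
loose = [[4.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 4.0]]
tight = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```

Maximizing this index over candidate mouse positions is then a standard constrained optimization over the admissible installation region.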

A Novel Expectation-Maximization based Channel Estimation for OFDM Systems (Expectation-Maximization 기반의 새로운 OFDM 채널 추정 방식)

  • Kim, Nam-Kyeom;Sohn, In-Soo;Shin, Jae-Ho
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.4C
    • /
    • pp.397-402
    • /
    • 2009
  • Accurate estimation of a time-selective fading channel is a difficult problem in OFDM (Orthogonal Frequency Division Multiplexing) systems, and many existing channel estimation algorithms are very weak in noisy channels. To solve this problem, we use the EM (Expectation-Maximization) algorithm for iterative optimization over the data and propose an EM-LPC algorithm to estimate the time-selective fading channel. The proposed algorithm improves the BER performance compared to the EM-based channel estimation algorithm and reduces the number of EM iterations. We simulated an uncoded system; if a coded system uses the EM-LPC algorithm, the performance is further enhanced by the coding gain. The EM-LPC algorithm can be applied not only to OFDM but also to other communication systems, as well as to the image processing of medical instruments, where accurate estimation is demanded.
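The E-step/M-step iteration underlying this approach can be illustrated with a generic EM skeleton. This is a hedged, toy example (a two-component 1D Gaussian mixture with unit variances), not the paper's EM-LPC estimator; the data and starting means are invented for illustration.

```python
import math

# Generic EM skeleton: alternately compute posterior responsibilities
# (E-step) and responsibility-weighted parameter updates (M-step).
# Toy stand-in for the iterative structure the abstract describes.

def gauss(x, mu):
    """Unit-variance Gaussian density."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def em_two_means(data, mu0, mu1, iters=50):
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 per sample
        r = [gauss(x, mu1) / (gauss(x, mu0) + gauss(x, mu1)) for x in data]
        # M-step: responsibility-weighted means
        w1 = sum(r)
        w0 = len(data) - w1
        mu0 = sum((1.0 - ri) * x for ri, x in zip(r, data)) / w0
        mu1 = sum(ri * x for ri, x in zip(r, data)) / w1
    return mu0, mu1

# Two well-separated clusters: EM recovers both means from a rough start.
data = [-2.1, -1.9, -2.0, 1.9, 2.0, 2.1]
m0, m1 = em_two_means(data, -1.0, 1.0)
```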

Enhancing Focus Measurements in Shape From Focus Through 3D Weighted Least Square (3차원 가중최소제곱을 이용한 SFF에서의 초점 측도 개선)

  • Mahmood, Muhammad Tariq;Ali, Usman;Choi, Young Kyu
    • Journal of the Semiconductor & Display Technology
    • /
    • v.18 no.3
    • /
    • pp.66-71
    • /
    • 2019
  • In shape from focus (SFF) methods, the quality of the image focus volume plays a vital role in the quality of 3D shape reconstruction. Traditionally, a linear 2D filter is applied to each slice of the image focus volume to rectify the noisy focus measurements. However, this approach is problematic because it also modifies the accurate focus measurements that should ideally remain intact. Therefore, in this paper, we propose to enhance the focus volume adaptively by applying 3-dimensional weighted least squares (3D-WLS) based regularization. We estimate regularization weights from the guidance volume extracted from the image sequences. To solve the 3D-WLS optimization problem efficiently, we apply a technique that solves a series of 1D linear sub-problems. Experiments conducted on synthetic and real image sequences demonstrate that the proposed method effectively enhances the image focus volume, ultimately improving the quality of the reconstructed shape.
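The decomposition into 1D linear sub-problems can be illustrated with a single scan-line weighted-least-squares solve. This is a hedged sketch of the general 1D WLS smoothing step, not the paper's exact 3D formulation; the signal, weights, and lambda are illustrative. Each sub-problem minimizes a data term plus a weighted smoothness term, whose normal equations form a tridiagonal system.

```python
# Hedged sketch of one 1D WLS sub-problem: minimize
#   sum_i (u_i - f_i)^2 + lam * sum_i w_i (u_{i+1} - u_i)^2,
# solved via its tridiagonal normal equations (Thomas algorithm).
# w[i] is the smoothness weight between samples i and i+1.

def wls_1d(f, w, lam):
    n = len(f)
    a = [0.0] * n  # sub-diagonal
    b = [0.0] * n  # main diagonal
    c = [0.0] * n  # super-diagonal
    d = list(f)    # right-hand side (copy, so f is untouched)
    for i in range(n):
        left = w[i - 1] if i > 0 else 0.0
        right = w[i] if i < n - 1 else 0.0
        b[i] = 1.0 + lam * (left + right)
        if i > 0:
            a[i] = -lam * w[i - 1]
        if i < n - 1:
            c[i] = -lam * w[i]
    # Forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# Smoothing pulls the noisy middle sample toward its neighbours while
# exactly preserving the sum of the signal (the system has zero row sums
# in its smoothness part).
smoothed = wls_1d([1.0, 5.0, 1.0], [1.0, 1.0], 2.0)
```

In the paper's setting, the weights `w` would come from the guidance volume, so smoothing is suppressed across focus edges.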

A Comparative Study of Estimation by Analogy using Data Mining Techniques

  • Nagpal, Geeta;Uddin, Moin;Kaur, Arvinder
    • Journal of Information Processing Systems
    • /
    • v.8 no.4
    • /
    • pp.621-652
    • /
    • 2012
  • Software estimation provides an inclusive set of directives for software project developers, project managers, and management to produce more realistic estimates based on deficient, uncertain, and noisy data. A range of estimation models is being explored in industry, as well as in academia for research purposes, but choosing the best model is quite intricate. Estimation by Analogy (EbA) is a form of case-based reasoning, which uses fuzzy logic, grey system theory, or machine-learning techniques for optimization. This research compares the estimation accuracy of some conventional data mining models with a hybrid model. Different data mining models are under consideration, including linear regression models like ordinary least squares and ridge regression, and nonlinear models like neural networks, support vector machines, and multivariate adaptive regression splines. A precise and comprehensible predictive model based on the integration of grey relational analysis (GRA) and regression has been introduced and compared. Empirical results have shown that regression, when used with GRA, gives outstanding results, indicating that the methodology has great potential and can be used as a candidate approach for software effort estimation.
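The grey relational analysis (GRA) component of the hybrid model can be sketched briefly. A hedged illustration, assuming features already normalized to [0, 1]; the reference project, the candidate projects, and the distinguishing coefficient `zeta` are invented for illustration, not taken from the study's datasets.

```python
# Hedged sketch of grey relational analysis: score each candidate
# (analogue) project by its grey relational grade against a reference
# project. zeta is the usual distinguishing coefficient (commonly 0.5).

def grey_relational_grades(reference, candidates, zeta=0.5):
    """Return one grade per candidate: the mean of the grey relational
    coefficients over all features."""
    deltas = [[abs(r - c) for r, c in zip(reference, cand)]
              for cand in candidates]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    grades = []
    for row in deltas:
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# The candidate closest to the reference project gets the highest grade;
# EbA would then base the effort estimate on the top-graded analogues.
target = [0.5, 0.5]
grades = grey_relational_grades(target, [[0.4, 0.6], [0.9, 0.1]])
```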

Image Denoising for Metal MRI Exploiting Sparsity and Low Rank Priors

  • Choi, Sangcheon;Park, Jun-Sik;Kim, Hahnsung;Park, Jaeseok
    • Investigative Magnetic Resonance Imaging
    • /
    • v.20 no.4
    • /
    • pp.215-223
    • /
    • 2016
  • Purpose: The management of metal-induced field inhomogeneities is one of the major concerns in obtaining distortion-free magnetic resonance images near metallic implants. The recently proposed method called "Slice Encoding for Metal Artifact Correction (SEMAC)" is an effective spin echo pulse sequence for magnetic resonance imaging (MRI) near metallic implants. However, because SEMAC uses the noisy resolved data elements, improving the signal-to-noise ratio (SNR) of SEMAC images without compromising the correction of metal artifacts is a major problem. To address that issue, this paper presents a novel reconstruction technique that improves the SNR of SEMAC images without sacrificing the correction of metal artifacts. Materials and Methods: Low-rank approximation in each coil image is first performed to suppress the noise in the slice direction, because the signal is highly correlated between SEMAC-encoded slices. Second, SEMAC images are reconstructed by the best linear unbiased estimator (BLUE), also known as Gauss-Markov or weighted least squares; noise levels and correlation in the receiver channels are considered for the sake of SNR optimization. Finally, since the distorted excitation profiles are sparse, $l_1$ minimization performs well in recovering them, and the sparse modeling of our approach offers excellent correction of metal-induced distortions. Results: Three reconstructions were compared: SEMAC, SEMAC with the conventional two-step noise reduction, and the proposed denoising algorithm exploiting sparsity and low-rank approximation. The proposed algorithm outperformed both methods, producing 119% higher SNR than SEMAC and 89% higher SNR than SEMAC with the conventional two-step noise reduction.
Conclusion: We successfully demonstrated that the proposed, novel algorithm for SEMAC, compared with conventional de-noising methods, substantially improves SNR and reduces artifacts.
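The low-rank step in the Materials and Methods can be illustrated with a truncated SVD, a standard way to realize a low-rank approximation. This is a hedged sketch, not the paper's slice-direction formulation: the matrix sizes, the rank choice, and the noise level are illustrative.

```python
import numpy as np

# Hedged sketch of low-rank denoising via singular value truncation.
# Idea: if the columns (here, stand-ins for SEMAC-encoded slices) are
# strongly correlated, the signal lives in a low-rank subspace and
# discarding small singular values removes mostly noise.

def low_rank_denoise(M, rank):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0.0                      # hard-threshold singular values
    return (U * s) @ Vt

# Rank-1 signal plus small noise: truncation recovers the signal closely.
rng = np.random.default_rng(0)
signal = np.outer(np.ones(8), np.arange(1.0, 6.0))
noisy = signal + 0.01 * rng.standard_normal(signal.shape)
denoised = low_rank_denoise(noisy, 1)

err_noisy = float(np.linalg.norm(noisy - signal))
err_denoised = float(np.linalg.norm(denoised - signal))
```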

Ultra-WideBand Channel Measurement with Compressive Sampling for Indoor Localization (실내 위치추정을 위한 Compressive Sampling적용 Ultra-WideBand 채널 측정기법)

  • Kim, Sujin;Myung, Jungho;Kang, Joonhyuk;Sung, Tae-Kyung;Lee, Kwang-Eog
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.2
    • /
    • pp.285-297
    • /
    • 2015
  • In this paper, Ultra-WideBand (UWB) channel measurement and modeling based on compressive sampling (CS) are proposed. The sparsity of the channel impulse response (CIR) of the UWB signal in the frequency domain enables the proposed channel measurement to have low complexity while providing performance comparable with existing approaches, especially for indoor geo-localization. Furthermore, to improve performance under noisy conditions, a soft thresholding method is also investigated for solving the optimization problem of CS signal recovery. Via numerical results, the proposed channel measurement and modeling are evaluated against real measured data in terms of location estimation error, bandwidth, and compression ratio for indoor geo-localization using a UWB system.
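The soft-thresholding operator investigated for the CS recovery step is simple enough to sketch. A hedged, minimal version: the tap values and threshold below are illustrative, and in an iterative recovery scheme (e.g. ISTA-style) this operator would be applied once per iteration after a gradient step.

```python
# Hedged sketch of the soft-thresholding (shrinkage) operator used in
# sparse signal recovery: shrink every coefficient toward zero by t and
# zero out those with magnitude below t.

def soft_threshold(x, t):
    out = []
    for v in x:
        if v > t:
            out.append(v - t)
        elif v < -t:
            out.append(v + t)
        else:
            out.append(0.0)
    return out

# Small, noise-like taps are removed; dominant taps are kept (shrunk),
# which is what preserves a sparse CIR under noise.
taps = [0.9, -0.05, 0.02, -0.7]
cleaned = soft_threshold(taps, 0.1)
```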

Depth Upsampling Method Using Total Generalized Variation (일반적 총변이를 이용한 깊이맵 업샘플링 방법)

  • Hong, Su-Min;Ho, Yo-Sung
    • Journal of Broadcast Engineering
    • /
    • v.21 no.6
    • /
    • pp.957-964
    • /
    • 2016
  • Acquisition of reliable depth maps is a critical requirement in many applications such as 3D video and free-viewpoint TV. Depth information can be obtained directly from the object using physical sensors, such as infrared ray (IR) sensors. Recently, Time-of-Flight (ToF) range cameras, including the KINECT depth camera, have become popular alternatives for dense depth sensing. Although ToF cameras can capture depth information for objects in real time, their outputs are noisy and of low resolution. Filter-based depth upsampling algorithms such as joint bilateral upsampling (JBU) and the noise-aware filter for depth upsampling (NAFDU) have been proposed to obtain high-quality depth information, but these methods often lead to texture copying in the upsampled depth map. To overcome this limitation, we formulate a convex optimization problem using higher-order regularization for depth map upsampling. We reduce the texture copying problem by using an edge weighting term chosen according to the edge information. Experimental results have shown that our scheme produces more reliable depth maps than previous methods.
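The joint bilateral upsampling (JBU) baseline that this abstract contrasts with its TGV formulation can be sketched in 1D. A hedged illustration: the upsampling factor, the Gaussian sigmas, and the tiny depth/guide signals are invented, and a real implementation would operate on 2D images with windowed neighbourhoods.

```python
import math

# Hedged 1D sketch of joint bilateral upsampling: each high-res output
# sample is a weighted average of low-res depth samples, weighted by
# spatial closeness AND similarity in the high-res guide (color) signal.

def jbu_1d(depth_lr, guide_hr, factor, sigma_s=1.0, sigma_r=0.1):
    out = []
    for i, g in enumerate(guide_hr):
        num = den = 0.0
        for j, d in enumerate(depth_lr):
            pos = j * factor          # HR position of low-res sample j
            spatial = math.exp(-((i - pos) ** 2)
                               / (2.0 * (sigma_s * factor) ** 2))
            range_w = math.exp(-((g - guide_hr[pos]) ** 2)
                               / (2.0 * sigma_r ** 2))
            num += spatial * range_w * d
            den += spatial * range_w
        out.append(num / den)
    return out

# A depth edge aligned with a step in the guide survives upsampling
# sharply instead of being blurred across the boundary.
depth_lr = [0.0, 1.0]
guide_hr = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
depth_hr = jbu_1d(depth_lr, guide_hr, 4)
```

The texture-copying failure mode the abstract mentions arises when the guide has strong texture where the depth is actually flat; the paper's edge weighting inside a convex higher-order formulation is meant to avoid exactly that.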

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is constituted by 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1487 observations; 1187 days were used to train the suggested GARCH models and the remaining 300 days as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting, although the polynomial kernel function shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract that traders have been able to trade since November 2014. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. Profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return while SVR-based symmetric S-GARCH shows +526.4%; MLE-based asymmetric E-GARCH shows -72% while SVR-based asymmetric E-GARCH shows +245.6%; MLE-based asymmetric GJR-GARCH shows -98.7% while SVR-based asymmetric GJR-GARCH shows +126.3%. The linear kernel function yields higher trading returns than the radial kernel function. The best performance of SVR-based IVTS is +526.4%, against +150.2% for MLE-based IVTS; SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage.
The IVTS trading performance is also not fully realistic, since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
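The GARCH model at the center of this study reduces to a one-line conditional-variance recursion, which both the MLE and SVR procedures aim to parameterize. A hedged sketch of a GARCH(1,1) variance path; the parameter values and return series below are illustrative, not estimates from the KOSPI 200 data.

```python
# Hedged sketch of the GARCH(1,1) conditional-variance recursion:
#   sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
# Parameters omega, alpha, beta are illustrative placeholders.

def garch11_variance(returns, omega, alpha, beta):
    var = omega / (1.0 - alpha - beta)   # start at unconditional variance
    path = [var]
    for r in returns[:-1]:
        var = omega + alpha * r * r + beta * var
        path.append(var)
    return path

# A large return shock raises the next period's conditional variance,
# the volatility clustering that GARCH models capture.
sig2 = garch11_variance([0.0, 0.05, 0.0, 0.0], 1e-5, 0.1, 0.85)
```

In the MLE approach these three parameters maximize the Gaussian likelihood of the return series; in the paper's SVR approach a support vector regression (with linear, polynomial, or radial kernel) replaces that estimation step.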