• Title/Summary/Keyword: error-state approach

Search Results: 211

Using Non-Local Features to Improve Named Entity Recognition Recall

  • Mao, Xinnian;Xu, Wei;Dong, Yuan;He, Saike;Wang, Haila
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.303-310
    • /
    • 2007
  • Named Entity Recognition (NER) is often limited by low recall, a consequence of the asymmetric data distribution in which the NONE class dominates the entity classes. This paper presents an approach that exploits non-local information to improve NER recall. Several kinds of non-local features encoding entity token occurrence, entity boundary, and entity class are explored within the Conditional Random Fields (CRFs) framework. Experiments on the SIGHAN 2006 MSRA (CityU) corpus indicate that non-local features can effectively enhance the recall of state-of-the-art NER systems. By incorporating the non-local features into NER systems that use local features alone, our best system achieves a 23.56% (25.26%) relative error reduction in recall and a 17.10% (11.36%) relative error reduction in F1 score; the improved F1 score of 89.38% (90.09%) is significantly superior to the best NER system, with an F1 of 86.51% (89.03%), that participated in the closed track.
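
As a rough illustration of how non-local evidence of this kind could be fed into a CRF, the sketch below augments ordinary local features with a document-level "entity class seen elsewhere for this token" feature. The two-pass scheme, the `word.lower` feature key, and the feature names are assumptions made for illustration, not the authors' exact feature templates.

```python
from collections import defaultdict

def add_nonlocal_features(sentences, first_pass_labels):
    """Augment per-token feature dicts with corpus-level (non-local) evidence.

    sentences:          list of sentences, each a list of feature dicts
                        (one dict per token) already holding local features.
    first_pass_labels:  labels predicted by a local-feature CRF, aligned
                        with `sentences`; used as the source of non-local
                        evidence (entity classes a token received elsewhere).
    """
    # Collect, for every surface token, the entity classes it was assigned
    # anywhere in the document during the first (local-only) pass.
    classes_seen = defaultdict(set)
    for sent, labels in zip(sentences, first_pass_labels):
        for feats, label in zip(sent, labels):
            if label != 'O':
                classes_seen[feats['word.lower']].add(label.split('-')[-1])

    # Second pass: expose that evidence as extra features for re-decoding.
    for sent in sentences:
        for feats in sent:
            for cls in classes_seen.get(feats['word.lower'], ()):
                feats[f'nonlocal.class={cls}'] = 1.0
    return sentences
```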


Optimal PID Controller Design for DC Motor Speed Control System with Tracking and Regulating Constrained Optimization via Cuckoo Search

  • Puangdownreong, Deacha
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.1
    • /
    • pp.460-467
    • /
    • 2018
  • The metaheuristic optimization approach has become a new framework for control synthesis. The main purposes of control design are command (input) tracking and load (disturbance) regulation. This article proposes an optimal proportional-integral-derivative (PID) controller design for a DC motor speed control system, formulated as a tracking and regulating constrained optimization and solved by the cuckoo search (CS), one of the most efficient population-based metaheuristic optimization techniques. The sum-squared error between the reference input and the controlled output is set as the objective function to be minimized. The rise time, maximum overshoot, settling time, and steady-state error are set as inequality constraints for the tracking purpose, while the regulating time and the maximum overshoot of load regulation are set as inequality constraints for the regulating purpose. Results obtained by the CS are compared with those obtained by the conventional design method known as the Ziegler-Nichols (Z-N) tuning rules. The simulation results show that Z-N yields an impractical PID controller with very high gains, whereas the CS gives an optimal PID controller for the DC motor speed control system that satisfies the preset tracking and regulating constraints. In addition, the simulation results are confirmed by experimental results from a DC motor speed control system implemented with analog technology.
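
A minimal sketch of the kind of constrained objective the cuckoo search would minimize over the PID gains is given below. The first-order motor model, its gain and time constant, the penalty weights, and the constraint thresholds are all illustrative assumptions; only two of the paper's constraints are shown, and its actual plant model is not reproduced here.

```python
import numpy as np

def pid_cost(gains, dt=1e-3, t_end=2.0):
    """Constrained tracking objective for PID tuning (sketch).

    gains = (Kp, Ki, Kd).  The DC motor is reduced to a first-order speed
    model dw/dt = (-w + K*u)/tau with assumed K and tau.
    """
    Kp, Ki, Kd = gains
    K, tau = 1.0, 0.5                      # assumed motor gain / time constant
    w, integ, prev_e = 0.0, 0.0, 1.0       # speed, error integral, last error
    t = np.arange(0.0, t_end, dt)
    y = np.empty_like(t)
    for i in range(t.size):
        e = 1.0 - w                        # unit-step speed reference
        integ += e * dt
        u = Kp * e + Ki * integ + Kd * (e - prev_e) / dt
        prev_e = e
        w += dt * (-w + K * u) / tau       # Euler step of the motor model
        y[i] = w

    sse = float(np.sum((1.0 - y) ** 2) * dt)   # sum-squared tracking error
    overshoot = max(y.max() - 1.0, 0.0)
    ess = abs(1.0 - y[-1])                     # steady-state error
    # Inequality constraints handled as static penalties (illustrative limits).
    penalty = 1e3 * max(overshoot - 0.05, 0.0) + 1e3 * max(ess - 0.01, 0.0)
    return sse + penalty
```

Any population-based optimizer, cuckoo search included, would then search over (Kp, Ki, Kd) to minimize `pid_cost`.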

NOVEL GEOMETRIC PARAMETERIZATION SCHEME FOR THE CERTIFIED REDUCED BASIS ANALYSIS OF A SQUARE UNIT CELL

  • LE, SON HAI;KANG, SHINSEONG;PHAM, TRIET MINH;LEE, KYUNGHOON
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.25 no.4
    • /
    • pp.196-220
    • /
    • 2021
  • This study formulates a new geometric parameterization scheme to effectively address numerical analysis subject to variation of the fiber radius of a square unit cell. In particular, the proposed mesh-morphing approach may lead to a parameterized weak form whose bilinear and linear forms are affine in the geometric parameter of interest, i.e., the fiber radius. As a result, we may certify the reduced basis analysis of a square unit cell model for any parameter in a predetermined parameter domain with a rigorous a posteriori error bound. To demonstrate the utility of the proposed geometric parameterization, we consider a two-dimensional, steady-state heat conduction analysis dependent on two parameters: a fiber radius and a thermal conductivity. For rapid yet rigorous a posteriori error evaluation, we estimate a lower bound of the coercivity constant via the min-θ method as well as the successive constraint method. Compared to the corresponding finite element analysis, the constructed reduced basis analysis may yield nearly the same solution at a computational speed about 29 times faster on average. In conclusion, the proposed geometric parameterization scheme is conducive to accurate yet efficient reduced basis analysis.
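
The sketch below illustrates the two ingredients the abstract relies on: assembling an operator from an affine decomposition A(μ) = Σ_q θ_q(μ) A_q, and the min-θ lower bound of the coercivity constant used in the a posteriori error bound. The function names and the assumption that every θ_q is positive are illustrative; the paper's specific decomposition is not reproduced.

```python
import numpy as np

def assemble_affine(thetas, A_q, mu):
    """A(mu) = sum_q theta_q(mu) * A_q -- the affine form that a mesh-morphing
    parameterization is designed to produce (sketch; A_q are parameter-
    independent matrices, thetas are scalar parameter functions)."""
    return sum(theta(mu) * Aq for theta, Aq in zip(thetas, A_q))

def min_theta_lower_bound(thetas, mu, mu_bar, alpha_bar):
    """Min-theta lower bound of the coercivity constant:
        alpha_LB(mu) = alpha(mu_bar) * min_q theta_q(mu) / theta_q(mu_bar),
    valid when every theta_q is positive (an assumption of the method)."""
    return alpha_bar * min(theta(mu) / theta(mu_bar) for theta in thetas)
```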

Henry gas solubility optimization for control of a nuclear reactor: A case study

  • Mousakazemi, Seyed Mohammad Hossein
    • Nuclear Engineering and Technology
    • /
    • v.54 no.3
    • /
    • pp.940-947
    • /
    • 2022
  • Meta-heuristic algorithms have found their place in optimization problems. Henry gas solubility optimization (HGSO) is one of the newest population-based algorithms, inspired by Henry's law of physics. To evaluate the performance of a new algorithm, it must be applied to a variety of problems. On the other hand, optimizing the proportional-integral-derivative (PID) gains for load-following of a nuclear power plant (NPP) is a good challenge for assessing the performance of HGSO. Accordingly, the power control of a pressurized water reactor (PWR) is targeted, based on the point kinetics model with six groups of delayed-neutron precursors. Any optimization problem based on meta-heuristic algorithms requires an efficient objective function. Therefore, the integral of the time-weighted squared error (ITSE) performance index is utilized as the objective (cost) function of HGSO, constrained by a stability criterion in steady-state operations; a Lyapunov approach guarantees this stability. The results show that this method provides superior results, with the least error, compared to an empirically tuned PID controller. It also achieves good accuracy compared to an established GA-tuned PID controller.
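
For reference, the ITSE performance index is ∫ t·e(t)² dt; a minimal numerical sketch of its evaluation from sampled data is shown below. The trapezoidal rule and the stand-in error signal are assumed choices, not details taken from the paper.

```python
import numpy as np

def itse(t, error):
    """Integral of the time-weighted squared error, ITSE = integral of t*e(t)^2 dt,
    evaluated from sampled data with the trapezoidal rule (sketch)."""
    t = np.asarray(t, dtype=float)
    f = t * np.asarray(error, dtype=float) ** 2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

t = np.linspace(0.0, 10.0, 1001)
e = np.exp(-t)                     # stand-in error signal for demonstration
print(itse(t, e))                  # HGSO would minimize this over the PID gains
```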

Matching Performance Analysis of Upsampled Satellite Image and GCP Chip for Establishing Automatic Precision Sensor Orientation for High-Resolution Satellite Images

  • Hyeon-Gyeong Choi;Sung-Joo Yoon;Sunghyeon Kim;Taejung Kim
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.1
    • /
    • pp.103-114
    • /
    • 2024
  • The escalating demand for high-resolution satellite imagery necessitates the dissemination of geospatial data with superior accuracy. Achieving precise positioning is imperative for mitigating the geometric distortions inherent in high-resolution satellite imagery. However, maintaining sub-pixel-level accuracy poses significant challenges within the current technological landscape. This research introduces an approach in which upsampling is employed on both the satellite image and the ground control point (GCP) chips, facilitating the establishment of precision sensor orientation for high-resolution satellite images. The ensuing analysis entails a comprehensive comparison of matching performance. To evaluate the proposed methodology, the Compact Advanced Satellite 500-1 (CAS500-1), with a resolution of 0.5 m, serves as the high-resolution satellite image. Correspondingly, GCP chips with resolutions of 0.25 m and 0.5 m are utilized for the South Korean and North Korean regions, respectively. Results from the experiment reveal that concurrent upsampling of the satellite imagery and GCP chips enhances matching performance by up to 50% in comparison to the original resolution. Furthermore, the position error improved only with 2x upsampling; with 3x upsampling, the position error tended to increase. This study affirms that careful upsampling of high-resolution satellite imagery and GCP chips can yield sub-pixel-level positioning accuracy, thereby advancing the state of the art in the field.
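
A minimal sketch of the matching step described above, upsampling both the satellite image and a GCP chip before template matching, is given below. The OpenCV calls, the cubic interpolation kernel, and the normalized cross-correlation score are assumed choices, not necessarily those used in the paper.

```python
import cv2

def match_with_upsampling(image, chip, factor=2):
    """Upsample both the satellite image and the GCP chip by `factor`, run
    normalized cross-correlation matching, and map the correlation peak back
    to original-image coordinates (yielding a sub-pixel location)."""
    up_img = cv2.resize(image, None, fx=factor, fy=factor,
                        interpolation=cv2.INTER_CUBIC)
    up_chip = cv2.resize(chip, None, fx=factor, fy=factor,
                         interpolation=cv2.INTER_CUBIC)
    score = cv2.matchTemplate(up_img, up_chip, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(score)
    col, row = max_loc
    # Dividing by the factor expresses the match in the original pixel grid.
    return (row / factor, col / factor), max_val
```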

Accelerated Monte Carlo analysis of flow-based system reliability through artificial neural network-based surrogate models

  • Yoon, Sungsik;Lee, Young-Joo;Jung, Hyung-Jo
    • Smart Structures and Systems
    • /
    • v.26 no.2
    • /
    • pp.175-184
    • /
    • 2020
  • Conventional Monte Carlo simulation-based methods for seismic risk assessment of water networks often require excessive computation time due to the hydraulic analysis. In this study, an artificial neural network (ANN)-based surrogate model was proposed to efficiently evaluate the flow-based system reliability of water distribution networks. The surrogate model was constructed with appropriate training parameters through trial-and-error procedures, and a deep neural network with multiple hidden layers was composed for the high-dimensional network. For network training, the input of the neural network was defined as the damage states of the k-dimensional network facilities, and the output was defined as the network system performance. To generate training data, random sampling was performed between earthquake magnitudes of 5.0 and 7.5, and hydraulic analyses were conducted to evaluate network performance. For the hydraulic simulation, EPANET-based MATLAB code was developed, and a pressure-driven analysis approach was adopted to represent an unsteady-state network. To demonstrate the constructed surrogate model, the actual water distribution network of A-city, South Korea, was adopted, and the network map was reconstructed from geographic information system data. The surrogate model was able to predict network performance within 3% relative error at the trained epicenters in drastically reduced time. In addition, the accuracy of the surrogate model was estimated to be within 3% relative error (5% for network performance lower than 0.2) at different epicenters, verifying robustness with respect to the epicenter location. Therefore, it is concluded that the ANN-based surrogate model can be utilized as an alternative for efficient seismic risk assessment within 5% relative error.
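
A minimal sketch of the surrogate idea, mapping binary facility damage states to a scalar system performance with a feed-forward network, is given below. The network size, the assumed number of facilities (k = 120), and the random placeholder data are illustrative; in the study the targets come from EPANET-based pressure-driven hydraulic analyses.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training set: binary damage states of k = 120 facilities (assumed)
# as inputs, flow-based system performance as the target.
rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(2000, 120)).astype(float)
y_train = rng.random(2000)                      # stand-in for hydraulic results

surrogate = MLPRegressor(hidden_layer_sizes=(256, 128, 64),  # sizes assumed
                         max_iter=300, random_state=0)
surrogate.fit(X_train, y_train)

# The Monte Carlo loop then queries the surrogate instead of the hydraulic solver.
X_mc = rng.integers(0, 2, size=(10, 120)).astype(float)
print(surrogate.predict(X_mc))
```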

One Step Measurements of hippocampal Pure Volumes from MRI Data Using an Ensemble Model of 3-D Convolutional Neural Network

  • Basher, Abol;Ahmed, Samsuddin;Jung, Ho Yub
    • Smart Media Journal
    • /
    • v.9 no.2
    • /
    • pp.22-32
    • /
    • 2020
  • Hippocampal volume atrophy is known to be linked with neurodegenerative disorders, and it is also one of the most important early biomarkers for Alzheimer's disease detection. Measuring hippocampal pure volumes from Magnetic Resonance Imaging (MRI) is a crucial task, and state-of-the-art methods require a large amount of time. In addition, structural brain development is investigated using MRI data, where brain morphometry (e.g., cortical thickness, volume, surface area, etc.) is one of the significant parts of the analysis. In this study, we propose a patch-based ensemble model of 3-D convolutional neural networks (CNNs) to measure the hippocampal pure volume from MRI data. The 3-D patches were extracted from the volumetric MRI scans to train the proposed 3-D CNN models. The trained models are used to construct the ensemble 3-D CNN model, and the aggregated model predicts the pure volume in one step in the test phase. Our approach takes only 5 seconds to estimate the volumes from an MRI scan. The average errors of the proposed ensemble 3-D CNN model are 11.7±8.8 (error%±STD) and 12.5±12.8 (error%±STD) for the left and right hippocampi of 65 test MRI scans, respectively. The quantitative study of the predicted volumes against the ground-truth volumes shows that the proposed approach can be used as a proxy.
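
The sketch below shows the general shape of a patch-based 3-D CNN ensemble: several independently trained members score the same set of patches, and their outputs are aggregated into a single volume estimate. The layer sizes, the 32³ patch size, and aggregation by simple averaging are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PatchVolumeNet(nn.Module):
    """Tiny 3-D CNN that regresses a volume estimate from one 32^3 MRI patch
    (layer and patch sizes are illustrative, not the paper's)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 8 * 8 * 8, 1))

    def forward(self, x):
        return self.head(self.features(x))

def ensemble_volume(models, patches):
    """One-step aggregation: average the member predictions over all patches."""
    with torch.no_grad():
        preds = torch.stack([m(patches) for m in models])  # (members, patches, 1)
    return preds.mean().item()

# Three independently trained members (untrained here) scoring 10 random patches.
members = [PatchVolumeNet().eval() for _ in range(3)]
patches = torch.randn(10, 1, 32, 32, 32)
print(ensemble_volume(members, patches))
```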

Low complexity hybrid layered tabu-likelihood ascent search for large MIMO detection with perfect and estimated channel state information

  • Sourav Chakraborty;Nirmalendu Bikas Sinha;Monojit Mitra
    • ETRI Journal
    • /
    • v.45 no.3
    • /
    • pp.418-432
    • /
    • 2023
  • In this work, we propose a low-complexity hybrid layered tabu-likelihood ascent search (LTLAS) algorithm for large multiple-input multiple-output (MIMO) systems. The conventional layered tabu search (LTS) approach involves many partial reactive tabu searches (RTSs), and each RTS requires an initialization and a searching phase. In the proposed algorithm, we restrict the upper limit on the number of RTS operations; once the RTS operations exceed this limit, RTS is replaced by low-complexity likelihood ascent search (LAS) operations. A block-based detection approach is adopted to maintain detection performance at higher signal-to-noise ratios (SNRs). An efficient precomputation technique is derived that suppresses redundant computations. The simulation results show that the bit error rate (BER) performance of the proposed detection method is close to that of the conventional LTS method. The complexity analysis shows that the proposed method has significantly lower computational complexity than conventional methods; it can also save almost 50% of the real operations needed to achieve a BER of 10⁻³.
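
The likelihood ascent search stage that replaces surplus RTS runs can be sketched as a greedy bit-flipping loop over the ML metric ||y − Hx||². The BPSK alphabet and the incremental residual update below are assumed simplifications, and the tabu-search and block-based stages are not shown.

```python
import numpy as np

def las_detect(H, y, x0, max_iter=1000):
    """Likelihood ascent search for BPSK (+/-1) symbols: greedily flip the one
    symbol that most reduces ||y - H x||^2, stopping at a local minimum."""
    x = x0.astype(float).copy()
    G_diag = np.sum(H * H, axis=0)           # ||h_k||^2, precomputed once
    r = y - H @ x                            # residual for the current estimate
    for _ in range(max_iter):
        grad = H.T @ r
        # Cost change of flipping symbol k:  delta_k = 4*x_k*grad_k + 4*||h_k||^2
        delta = 4.0 * x * grad + 4.0 * G_diag
        k = int(np.argmin(delta))
        if delta[k] >= 0.0:                  # no single flip improves the metric
            break
        r += 2.0 * x[k] * H[:, k]            # update residual incrementally
        x[k] = -x[k]
    return x
```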

Performance Degradation Due to Particle Impoverishment in Particle Filtering

  • Lim, Jaechan
    • Journal of Electrical Engineering and Technology
    • /
    • v.9 no.6
    • /
    • pp.2107-2113
    • /
    • 2014
  • Particle filtering (PF) has shown results superior to those of classical Kalman filtering (KF), particularly for highly nonlinear problems. However, PF may not be universally superior to the extended KF (EKF), although such cases (i.e., examples in which the EKF outperforms PF) are seldom reported in the literature. In particular, PF approaches show degraded performance for problems where the state noise is very small or zero. This is because the particles become identical within a few iterations, the so-called particle impoverishment (PI) phenomenon; consequently, no matter how many particles are employed, there is no particle diversity, regardless of whether the impoverished particle is close to the true state value or not. In this paper, we investigate this PI phenomenon and show an example problem in which a classical KF approach outperforms PF approaches in terms of the mean squared error (MSE) criterion. Furthermore, we compare the processing speeds of the EKF and PF approaches and show the better speed performance of classical EKF approaches. Therefore, PF approaches may not always be a better option than the classical EKF for nonlinear problems. Specifically, we show that the unscented Kalman filter outperforms the PF approaches (Fig. 7(c) for processing speed and Fig. 6 for MSE performance in the paper).
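
The impoverishment effect described above is easy to reproduce: with zero process noise, multinomial resampling collapses the particle cloud to a handful of identical copies within a few steps. The scalar random-walk model and noise levels in the sketch below are illustrative assumptions.

```python
import numpy as np

# Bootstrap particle filter on a scalar random-walk model with *zero* process
# noise -- the setting in which particle impoverishment appears.
rng = np.random.default_rng(0)
N, steps, q, r = 500, 15, 0.0, 0.1           # q = 0 removes particle diversity
x_true = 1.0
particles = rng.normal(0.0, 1.0, N)          # initial particle cloud

for t in range(steps):
    z = x_true + rng.normal(0.0, np.sqrt(r))           # measurement
    particles += rng.normal(0.0, np.sqrt(q), N)         # propagation (no-op, q = 0)
    w = np.exp(-0.5 * (z - particles) ** 2 / r)         # likelihood weights
    w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]        # multinomial resampling
    print(f"step {t:2d}: unique particles = {np.unique(particles).size}")
```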

Practical Pinch Torque Detection Algorithm for Anti-Pinch Window Control System Application

  • Lee, Hye-Jin;Ra, Won-Sang;Yoon, Tae-Sung;Park, Jin-Bae
• Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.2526-2531
    • /
    • 2005
  • A practical pinch torque estimator based on the Kalman filter is proposed for low-cost anti-pinch window control systems. To obtain an accurate angular velocity from Hall-effect sensor measurements, the angular velocity calculation algorithm is executed with additional procedures for removing measurement noise. Unlike previous works that use angular velocity and torque estimates to detect the pinched condition, the torque rate is augmented to the system model, and the proposed pinch estimator is derived by applying the steady-state Kalman filter recursion to the augmented model. The motivation for this approach comes from the idea that bias errors in the torque estimates due to motor parameter uncertainties can be almost eliminated by introducing the torque-rate state. For detecting the pinched condition, a systematic way to determine the threshold level of the torque-rate estimates is also suggested via a deterministic estimation-error analysis. Simulation results are given to verify the pinch detection performance of the proposed algorithm and its robustness against motor parameter uncertainties.
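
A minimal sketch of the augmented-state idea, appending the load (pinch) torque and its rate to the measured angular velocity and iterating the Riccati recursion toward a steady-state gain, is given below. The motor parameters, sample time, and noise covariances are placeholder assumptions (and the motor drive torque input is omitted); they are not the paper's values.

```python
import numpy as np

# Augmented pinch-estimation model (sketch): state = [angular velocity,
# load (pinch) torque, torque rate]; only the angular velocity is measured.
dt, J, b = 1e-3, 2e-4, 1e-3                 # sample time, inertia, friction (assumed)
F = np.array([[1 - b * dt / J, -dt / J, 0.0],   # omega dynamics (drive torque omitted)
              [0.0,            1.0,     dt ],   # torque_k+1 = torque_k + rate*dt
              [0.0,            0.0,     1.0]])  # torque rate modeled as a random walk
H = np.array([[1.0, 0.0, 0.0]])
Q = np.diag([1e-6, 1e-8, 1e-4])             # assumed process noise covariance
R = np.array([[1e-3]])                      # assumed measurement noise covariance

# Iterate the Riccati recursion to (approximately) the steady-state Kalman gain.
P = np.eye(3)
for _ in range(2000):
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    P = (np.eye(3) - K @ H) @ P
print("steady-state Kalman gain:", K.ravel())
```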
