• Title/Summary/Keyword: ESTIMATOR model


An Estimator Design of Turning Acceleration for Tracking a Maneuvering Target using Curvature (곡률을 이용한 기동표적 추적용 회전가속도 추정기 설계)

  • Joo, Jae-Seok;Park, Je-Hong;Lim, Sang-Seok
    • Journal of Advanced Navigation Technology / v.4 no.2 / pp.162-170 / 2000
  • Maneuvering targets are difficult for the Kalman filter to track, since the target model of the tracking filter may not fit the real target trajectory and the statistical characteristics of the target maneuver are unknown in advance. Several schemes have been proposed to track such wildly maneuvering targets and have improved tracking performance to some extent. In this paper, a Kalman filter-based scheme is proposed for maneuvering target tracking. The proposed scheme estimates the target acceleration input vector directly from features of the maneuvering target trajectory and updates a simple Kalman tracker using the acceleration estimates; a simplified sketch of this idea follows this entry. Simulation results for various target profiles are analyzed to compare the performance of the proposed scheme with that of conventional trackers.

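The abstract above describes feeding an externally estimated target acceleration into a simple Kalman tracker. As a rough illustration of that general idea only (not the paper's curvature-based estimator), the sketch below treats the acceleration estimate as the control input of a constant-velocity Kalman filter; the matrices, noise levels, and the `estimate_accel` stub are illustrative assumptions.

```python
import numpy as np

dt = 0.1  # sample period [s], illustrative value

# Constant-velocity model for one axis: state x = [position, velocity]
F = np.array([[1.0, dt],
              [0.0, 1.0]])        # state transition
B = np.array([0.5 * dt**2, dt])   # maps an acceleration input into the state
H = np.array([[1.0, 0.0]])        # only position is measured
Q = 0.01 * np.eye(2)              # process-noise covariance (assumed)
R = np.array([[1.0]])             # measurement-noise covariance (assumed)

def estimate_accel(positions):
    """Hypothetical stand-in for the paper's turning-acceleration estimator:
    a crude second difference of the last three position samples."""
    if len(positions) < 3:
        return 0.0
    return (positions[-1] - 2 * positions[-2] + positions[-3]) / dt**2

def kalman_step(x, P, z, a_hat):
    # Predict, injecting the estimated acceleration as a control input
    x_pred = F @ x + B * a_hat
    P_pred = F @ P @ F.T + Q
    # Update with the position measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + K @ (np.atleast_1d(z) - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new
```

In a full tracker, `estimate_accel` would be replaced by the paper's curvature-based turning-acceleration estimator and the step applied per axis.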

Call Admission Control in ATM by Neural Networks and Fuzzy Pattern Estimator (신경망과 퍼지 패턴 추정기를 이용한 ATM의 호 수락 제어)

  • Lee, Jin-Lee
    • The Transactions of the Korea Information Processing Society / v.6 no.8 / pp.2188-2195 / 1999
  • This paper proposes a new call admission control scheme that combines an inverse fuzzy vector quantizer (IFVQ) with a neural network, exploiting the benefits of IFVQ and the flexibility of fuzzy C-means (FCM) arithmetic, to decide whether a requested call that was not seen in the learning phase should be admitted. The system generates an estimated traffic pattern for the cell stream of a new call using the feasible/infeasible patterns in a codebook, the fuzzy membership values that represent the degree to which each codebook pattern matches the input pattern, and FCM arithmetic; a loose sketch of this membership-based estimation follows this entry. The input to the NN is a vector of traffic parameters, namely the means and variances of the number of arriving cells, and the decision to accept or reject a new call depends on whether the NN output exceeds the decision threshold (+0.5). This is a new call admission control technique that uses the membership values together with the traffic parameters declared to the CAC at call set-up, and it is valid for a very general traffic model in which the calls of a stream can belong to an unlimited number of traffic classes. Simulations show that the proposed method outperforms the conventional NN method.

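The key mechanism above is the FCM-style fuzzy membership between a new call's traffic vector and the codebook patterns, with an inverse-VQ step that mixes codebook patterns by membership. Below is a minimal sketch of that general mechanism, not the paper's exact formulation; the codebook values, the fuzzifier `m`, and the final decision rule are illustrative assumptions.

```python
import numpy as np

def fcm_memberships(x, codebook, m=2.0, eps=1e-12):
    """FCM-style membership of input vector x in each codebook pattern:
    u_i proportional to (1 / d_i^2)^(1/(m-1)), normalized to sum to 1."""
    d2 = np.sum((codebook - x) ** 2, axis=1) + eps
    w = d2 ** (-1.0 / (m - 1.0))
    return w / w.sum()

def ifvq_estimate(x, codebook, m=2.0):
    """Inverse-VQ style estimate: membership-weighted mix of codebook patterns."""
    u = fcm_memberships(x, codebook, m)
    return u @ codebook, u

# Toy usage: 2-D traffic descriptors (mean and variance of arriving cells)
codebook = np.array([[0.2, 0.1],    # "feasible" pattern (assumed)
                     [0.8, 0.6]])   # "infeasible" pattern (assumed)
new_call = np.array([0.3, 0.2])
pattern_hat, u = ifvq_estimate(new_call, codebook)
accept = u[0] > u[1]   # crude stand-in for the NN decision at threshold +0.5
```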

Estimating the Rumor Source by Rumor Centrality Based Query in Networks (네트워크에서 루머 중심성 기반 질의를 통한 루머의 근원 추정)

  • Choi, Jaeyoung
    • KIPS Transactions on Software and Data Engineering / v.8 no.7 / pp.275-288 / 2019
  • In this paper, we consider the problem of inferring the rumor source when sufficiently many nodes in the network have heard the rumor. This is an important problem because information spreads quickly in networks in many real-world settings, such as the diffusion of a new technology, computer virus/spam infection on the Internet, and the tweeting and retweeting of popular topics, and some of this information is harmful to other nodes. The problem has been studied extensively, and it has been shown that the detection probability cannot exceed 31% even for regular trees if the number of infected nodes is sufficiently large. Motivated by this, we study the impact of queries, i.e., asking additional questions of the candidate source nodes, and propose budget-assignment algorithms for queries when the network administrator has a finite budget. We perform various simulations of the proposed method and obtain detection probabilities that outperform those of prior work. A small sketch of the rumor-centrality estimator referenced in the title follows this entry.
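
Rumor centrality, the baseline estimator named in the title, scores a candidate node $v$ of an infected tree of $N$ nodes as $N!$ divided by the product of the subtree sizes obtained when the tree is rooted at $v$; the highest-scoring node is the maximum-likelihood source estimate. The sketch below computes that score for a toy tree and does not include the query/budget mechanism proposed in the paper.

```python
import math
from collections import defaultdict

def rumor_centrality(edges, nodes):
    """R(v) = N! / prod of subtree sizes when the infected tree is rooted at v."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    n = len(nodes)

    def subtree_sizes(root):
        parent, order, stack = {root: None}, [], [root]
        while stack:                              # iterative DFS from the root
            u = stack.pop()
            order.append(u)
            for w in adj[u]:
                if w != parent[u]:
                    parent[w] = u
                    stack.append(w)
        sizes = {}
        for u in reversed(order):                 # accumulate sizes bottom-up
            sizes[u] = 1 + sum(sizes[w] for w in adj[u] if parent.get(w) == u)
        return sizes

    scores = {}
    for v in nodes:
        sizes = subtree_sizes(v)
        scores[v] = math.factorial(n) // math.prod(sizes[u] for u in nodes)
    return scores

# Toy infected tree; the estimated source is the node with the largest score
edges = [(1, 2), (1, 3), (2, 4), (2, 5)]
scores = rumor_centrality(edges, nodes=[1, 2, 3, 4, 5])
source_estimate = max(scores, key=scores.get)
```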

Comparison Study of Kernel Density Estimation according to Various Bandwidth Selectors (다양한 대역폭 선택법에 따른 커널밀도추정의 비교 연구)

  • Kang, Young-Jin;Noh, Yoojeong
    • Journal of the Computational Structural Engineering Institute of Korea / v.32 no.3 / pp.173-181 / 2019
  • Kernel density estimation (KDE) is commonly used to estimate a probability distribution function from experimental data when the data are insufficient. The distribution estimated with KDE depends on the bandwidth selector, which smooths or overfits the kernel estimator to the experimental data. In this study, various bandwidth selectors, such as Silverman's rule of thumb, the rule using adaptive estimates, and the oversmoothing rule, were compared in terms of accuracy and conservativeness. For this purpose, statistical simulations were carried out using assumed true models, including unimodal and multimodal distributions, and the accuracy and conservativeness of the estimated distribution functions were compared for various data sets. In addition, simple reliability examples were used to verify how the distributions estimated by KDE with different bandwidth selectors affect reliability analysis results. A minimal KDE sketch using Silverman's rule follows this entry.
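
Silverman's rule of thumb, one of the bandwidth selectors compared above, sets the Gaussian-kernel bandwidth from the sample size and a robust spread estimate. Below is a minimal Gaussian-KDE sketch using that rule; the sample data are a placeholder, and the paper also examines adaptive and oversmoothing rules not shown here.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb: h = 0.9 * min(std, IQR / 1.34) * n^(-1/5)."""
    n = len(x)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    spread = min(np.std(x, ddof=1), iqr / 1.34)
    return 0.9 * spread * n ** (-1 / 5)

def gaussian_kde(x, grid, h):
    """Kernel density estimate on `grid` with a Gaussian kernel of bandwidth h."""
    u = (grid[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return k.mean(axis=1) / h

rng = np.random.default_rng(0)
sample = rng.normal(size=200)            # stand-in for experimental data
grid = np.linspace(-4.0, 4.0, 201)
pdf_hat = gaussian_kde(sample, grid, silverman_bandwidth(sample))
```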

Application of Recurrent Neural-Network based Kalman Filter for Uncertain Target Models (불확정 표적 모델에 대한 순환 신경망 기반 칼만 필터 설계)

  • DongBeom Kim;Daekyo Jeong;Jaehyuk Lim;Sawon Min;Jun Moon
    • Journal of the Korea Institute of Military Science and Technology / v.26 no.1 / pp.10-21 / 2023
  • For various target tracking applications, it is well known that the Kalman filter is the optimal estimator (in the minimum mean-square sense) for predicting and estimating the state (position and/or velocity) of linear dynamical systems driven by Gaussian stochastic noise. For nonlinear systems, the Extended Kalman filter (EKF) and the Unscented Kalman filter (UKF) are widely used; they can be viewed as approximations of the (linear) Kalman filter in the sense of the conditional expectation. However, implementing the EKF and UKF still requires exact dynamical model information and the statistical information of the noise. In this paper, we propose a recurrent neural-network based Kalman filter whose Kalman gain is obtained via the proposed GRU-LSTM based neural-network framework, which needs neither precise model information nor noise covariance information; a simplified sketch of a learned-gain filter follows this entry. With the proposed neural-network based Kalman filter, state estimation performance is enhanced in terms of tracking error, as verified through various linear and nonlinear tracking problems with incomplete model and covariance information.
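
As a rough illustration of replacing the analytic Kalman gain with the output of a recurrent network, the sketch below uses a single GRU cell that consumes the innovation and emits a gain matrix. The architecture, dimensions, and constant-velocity model are illustrative assumptions, not the paper's GRU-LSTM design.

```python
import torch
import torch.nn as nn

class LearnedGainFilter(nn.Module):
    """Kalman-style filter whose gain comes from a recurrent network, so no
    process/measurement noise covariances are needed (illustrative only)."""
    def __init__(self, state_dim=4, meas_dim=2, hidden=32):
        super().__init__()
        self.state_dim, self.meas_dim = state_dim, meas_dim
        self.gru = nn.GRUCell(meas_dim, hidden)        # consumes the innovation
        self.to_gain = nn.Linear(hidden, state_dim * meas_dim)

    def forward(self, x, h, z, F, H):
        x_pred = F @ x                                  # model-based prediction
        innovation = z - H @ x_pred                     # measurement residual
        h = self.gru(innovation.unsqueeze(0), h)        # recurrent summary of residuals
        K = self.to_gain(h).view(self.state_dim, self.meas_dim)
        return x_pred + K @ innovation, h               # learned-gain update

# Toy usage with an assumed 2-D constant-velocity model
dt = 0.1
F = torch.tensor([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], dtype=torch.float32)
H = torch.tensor([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=torch.float32)
filt = LearnedGainFilter()
x, h = torch.zeros(4), torch.zeros(1, 32)
z = torch.tensor([0.5, -0.2])
x, h = filt(x, h, z, F, H)   # in practice the network is trained on tracking error
```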

Recipe-based estimation system for 5D (3D + Cost + Schedule) CAD system (레서피(Recipe) 기반의 견적 방법을 이용한 5D CAD 시스템)

  • Choi, Cheol-Ho;Park, Young-Jin;Han, Sung-Hun;Chin, Sang-Yoon
    • Proceedings of the Korean Institute Of Construction Engineering and Management / 2006.11a / pp.154-160 / 2006
  • There have been few successful cases of CAD-based quantity take-off, even though CAD systems have been used in the Korean construction industry for more than 20 years. It has also not been easy to use 3D CAD for design management and cost management in construction, although 3D CAD has recently been very successful in those areas in the manufacturing industry. It is important to build 3D libraries and to supply them to designers in time. Architectural work is creative work, so architects like to create their own models. Unlike in manufacturing, a 3D CAD system cannot survive in the construction industry unless new 3D objects are supplied at the right time. Moreover, an estimation system based on 3D models must support the schematic design, detailed design, and construction design phases. Graphisoft's product "Constructor" consists of a modeller, an estimator, and a scheduler based on a 3D model. We applied the system to a real project, compared the estimation results, and produced a successful case study.


Long-term (2002~2017) Eutrophication Characteristics, Empirical Model Analysis in Hapcheon Reservoir, and the Spatio-temporal Variabilities Depending on the Intensity of the Monsoon (합천호의 장기간 (2002~2017) 부영양화 특성, 경험적 모델 분석 및 몬순강도에 따른 시공간적 이화학적 수질 변이)

  • Kang, Yu-Jin;Lee, Sang-Jae;An, Kwang-Guk
    • Korean Journal of Environment and Ecology / v.33 no.5 / pp.605-619 / 2019
  • The objective of this study was to analyze the eutrophication characteristics, empirical models, and variation of water quality according to monsoon intensity in Hapcheon Reservoir over the 16 years from 2002 to 2017. Long-term annual water quality analysis showed that Hapcheon Reservoir was in a mesotrophic to eutrophic condition, and the eutrophic state intensified after the summer monsoon. Annual rainfall volume (high vs. low rainfall) and the seasonal rainfall intensity in each year were the key factors regulating the long-term water quality variation, provided that there was no significant change in the point and non-point sources in the watershed. Dry years and wet years showed significant differences in the concentrations of TP, TN, BOD, and conductivity, indicating that precipitation had the most direct influence on nutrient and organic matter dynamics. Nutrient indicators (TP, TN), organic pollution indicators (BOD, COD), total suspended solids, and chlorophyll-a (Chl-a), used here as an estimator of primary productivity, had significant positive relations (p < 0.05) with precipitation. The Chl-a concentration, an indicator of green algae, was highly correlated with TP, TN, and BOD, which differed from other lakes that show lower Chl-a concentrations when nutrients increase excessively. Empirical model analysis of log-transformed TN, TP, and Chl-a indicated that the Chl-a concentration was linearly regulated by the phosphorus concentration, but not by the nitrogen concentration; a minimal sketch of such a log-log regression follows this entry. Spatial regression analysis of $\log_{10}TN$, $\log_{10}TP$, and $\log_{10}CHL$ in the riverine, transition, and lacustrine zones showed that TP and Chl-a were significantly related (p < 0.005) while TN and Chl-a were not (p > 0.05), indicating that phosphorus played a key role in algal growth. Moreover, the higher correlation of both $\log_{10}TP$ and $\log_{10}TN$ with $\log_{10}CHL$ in the riverine zone than in the lacustrine zone indicated that inorganic suspended solids had little impact on light limitation in the riverine zone.
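
The empirical models referred to above are log-log regressions of chlorophyll-a on nutrient concentrations, e.g. $\log_{10}CHL = a + b \cdot \log_{10}TP$. Below is a minimal fitting sketch; the numbers are placeholders, not the study's monitoring data.

```python
import numpy as np

# Placeholder observations of total phosphorus and chlorophyll-a (not the study's data)
tp  = np.array([12.0, 20.0, 35.0, 55.0, 80.0])
chl = np.array([2.1, 3.5, 6.8, 10.5, 15.0])

x, y = np.log10(tp), np.log10(chl)

# Ordinary least squares for log10(CHL) = a + b * log10(TP)
b, a = np.polyfit(x, y, 1)          # slope b, intercept a
y_hat = a + b * x
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```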

Automatic velocity analysis using bootstrapped differential semblance and global search methods (고해상도 속도스펙트럼과 전역탐색법을 이용한 자동속도분석)

  • Choi, Hyung-Wook;Byun, Joong-Moo;Seol, Soon-Jee
    • Geophysics and Geophysical Exploration / v.13 no.1 / pp.31-39 / 2010
  • The goal of automatic velocity analysis is to efficiently extract accurate velocities from voluminous seismic data. In this study, we developed an efficient automatic velocity analysis algorithm using bootstrapped differential semblance (BDS) and Monte Carlo inversion. To obtain more accurate results, the algorithm uses BDS, which provides higher velocity resolution than conventional semblance, as the coherency estimator; a sketch of conventional semblance-based picking follows this entry. In addition, the proposed automatic velocity analysis module includes a conditional initial-velocity determination step that reduces its running time, and a new optional root-mean-square (RMS) velocity constraint that prevents picking false peaks. The developed module was tested on a synthetic dataset and a marine field dataset from the East Sea, Korea. The stacked sections produced with the velocities from our algorithm showed coherent events and improved the quality of the normal-moveout-corrected result. Moreover, since our algorithm first finds the interval velocity ($\nu_{int}$) under interval-velocity constraints and then calculates an RMS velocity function from it, we can estimate geologically reasonable interval velocities. The boundaries of the interval velocities also match well with reflection events in the common-midpoint stacked sections.
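
For reference, the coherency measure that BDS improves upon is conventional semblance: amplitudes along a trial normal-moveout trajectory are summed across offsets and normalized by their energy, and the trial velocity with the highest semblance is picked. The sketch below shows that baseline only, not the bootstrapped differential semblance or Monte Carlo inversion of the paper; the window length and trial velocities are assumptions.

```python
import numpy as np

def semblance(gather, offsets, dt, t0, v, win=5):
    """Conventional semblance of a CMP gather (n_samples x n_traces) for one
    (t0, v) pair; NMO travel time per trace is t = sqrt(t0^2 + (x / v)^2)."""
    n_samples, _ = gather.shape
    num = den = 0.0
    for k in range(-(win // 2), win // 2 + 1):
        amps = []
        for i, x in enumerate(offsets):
            t = np.sqrt(t0 ** 2 + (x / v) ** 2) + k * dt
            idx = int(round(t / dt))
            if 0 <= idx < n_samples:
                amps.append(gather[idx, i])
        if amps:
            amps = np.asarray(amps)
            num += amps.sum() ** 2
            den += len(amps) * np.sum(amps ** 2)
    return num / den if den > 0.0 else 0.0

def pick_velocity(gather, offsets, dt, t0, v_trials):
    """Pick the trial velocity with maximum semblance at zero-offset time t0
    (a crude stand-in for the paper's global search)."""
    scores = [semblance(gather, offsets, dt, t0, v) for v in v_trials]
    return v_trials[int(np.argmax(scores))]
```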

A Brake Pad Wear Compensation Method and Performance Evaluation for ElectroMechanical Brake (전기기계식 제동장치의 제동패드 마모보상방법 및 성능평가)

  • Baek, Seung-Koo;Oh, Hyuck-Keun;Park, Choon-Soo;Kim, Seog-Won
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.10 / pp.581-588 / 2020
  • This study examined a brake pad wear compensation method for an electro-mechanical brake (EMB) using a braking test device. A three-phase interior permanent magnet synchronous motor (IPMSM) was used to drive the actuator of the EMB, and current control, speed control, and position control were used to control the clamping force. The wear compensation method is implemented as a software algorithm that updates the motor model equation by comparing the motor output torque current with a reference current, and a simple first-order motor model equation is applied to estimate the clamping force; a loose sketch of such a compensation loop follows this entry. The operation time to reach the maximum clamping force increased by less than 0.1 seconds compared with the brake pad in its initial condition. The experiment verified that, under the worn condition, the operating time remained within the 0.5-second reference and the maximum clamping force was still achieved. The software-based wear compensation method presented in this paper can be performed during the pre-departure test of rolling stock.
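
As a loose illustration of the kind of compensation described above, the sketch below uses a first-order model in which clamping force grows linearly with motor position beyond a pad-contact point, and shifts that contact point outward when the measured torque current at the expected contact position falls short of the reference. The model form, constants, and threshold are assumptions for illustration, not values from the paper.

```python
K_FORCE = 50.0   # [N per motor revolution] assumed pad/caliper stiffness
I_REF = 2.0      # [A] assumed reference torque current at pad contact

def clamping_force(theta, theta_contact):
    """Illustrative first-order model: force builds once the pad touches the disc."""
    return max(0.0, K_FORCE * (theta - theta_contact))

def compensate_wear(theta_contact, theta_now, i_torque, tol=0.2):
    """If the torque current at the expected contact position is still below the
    reference, the pad has worn; move the model's contact point to the current
    position so the next brake apply reaches full force in the reference time."""
    if i_torque < I_REF - tol:
        return theta_now
    return theta_contact
```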

The Comparative Study of NHPP Software Reliability Model Based on Log and Exponential Power Intensity Function (로그 및 지수파우어 강도함수를 이용한 NHPP 소프트웨어 무한고장 신뢰도 모형에 관한 비교연구)

  • Yang, Tae-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.8 no.6 / pp.445-452 / 2015
  • Software reliability is an important issue in the software development process, and software process improvement helps deliver a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper proposes reliability models with log- and power-type intensity functions (log-linear, log-power, and exponential-power) that can be applied efficiently to software reliability. The parameters were estimated using the maximum likelihood estimator together with the bisection method, and model selection was based on the mean squared error (MSE) and the coefficient of determination ($R^2$); a small sketch of these fit metrics follows this entry. A real failure data set was analyzed and compared using the proposed log- and power-type intensity functions, and the Laplace trend test was employed to ensure the reliability of the data. The study confirmed that the log-type models are also efficient in terms of reliability (coefficient of determination of 70% or more) and can be used as alternatives to conventional models. Based on this work, software developers should use prior knowledge of the software to choose the growth model and identify failure modes, which can be helpful.
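
The model comparison above relies on the MSE and $R^2$ between a fitted NHPP mean value function and the observed cumulative failure counts. Below is a small sketch with a log-power-type mean value function $m(t) = a(\ln(1+t))^b$, one plausible form of the "log power" family; the data and parameters are placeholders rather than MLE/bisection estimates.

```python
import numpy as np

def m_log_power(t, a, b):
    """Log-power-type NHPP mean value function m(t) = a * (ln(1 + t))^b."""
    return a * np.log1p(t) ** b

def fit_metrics(t_obs, n_obs, a, b):
    """MSE and coefficient of determination R^2 between observed cumulative
    failures n_obs at times t_obs and the fitted mean value function."""
    resid = n_obs - m_log_power(t_obs, a, b)
    mse = np.mean(resid ** 2)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((n_obs - n_obs.mean()) ** 2)
    return mse, r2

# Placeholder failure data (cumulative count vs. test time), not the paper's data set
t_obs = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
n_obs = np.array([3.0, 6.0, 10.0, 15.0, 21.0, 28.0])
mse, r2 = fit_metrics(t_obs, n_obs, a=3.0, b=1.3)   # parameters assumed, not fitted
```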