Title/Summary/Keyword: False Errors

Are theoretically calculated periods of vibration for skeletal structures error-free?

  • Mehanny, Sameh S.F.
    • Earthquakes and Structures
    • /
    • v.3 no.1
    • /
    • pp.17-35
    • /
    • 2012
  • Simplified equations for the fundamental period of vibration of skeletal structures provided by most seismic design provisions suffer from the absence of any associated confidence levels and of any reference to their empirical basis. Such equations may therefore give a sector of designers the false impression of yielding a fairly accurate value of the period of vibration. This paper, although not addressing simplified code equations, introduces a set of mathematical equations utilizing the theory of error propagation and First-Order Second-Moment (FOSM) techniques to determine bounds on the relative error in the theoretically calculated fundamental period of vibration of skeletal structures. In a complementary step, and for verification purposes, the Monte Carlo simulation technique has also been applied; despite involving larger computational effort, it is expected to provide more precise estimates than FOSM methods. Studies of parametric uncertainties applied to reinforced concrete frame bents - potentially idealized as SDOF systems - are conducted, demonstrating the effect of randomness and uncertainty in the various relevant properties shaping both mass and stiffness on the variance (i.e., relative error) in the estimated period of vibration. Correlation between mass and stiffness parameters - regarded as random variables - is also thoroughly discussed. According to the achieved results, a relative error in the period of vibration on the order of 19% for new designs/constructions, about 25% for existing structures assessed in practice, and climbing to about 36% in some special applications and/or circumstances is acknowledged when adopting estimates gathered from the literature for the relative errors in the relevant random input variables.
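
The propagation step lends itself to a compact illustration. Below is a minimal sketch, assuming an SDOF idealization with T = 2*pi*sqrt(m/k); the coefficients of variation and the correlation value are illustrative assumptions, not inputs taken from the paper, and the Monte Carlo run simply cross-checks the FOSM bound as the abstract describes.

```python
import numpy as np

# First-order (FOSM) propagation of relative errors for T = 2*pi*sqrt(m/k),
# the period of an SDOF idealization. delta_m and delta_k are coefficients
# of variation of mass and stiffness, rho is their correlation; the numbers
# below are illustrative assumptions, not the paper's inputs.
def fosm_period_cov(delta_m, delta_k, rho=0.0):
    # dT/dm = T/(2m), dT/dk = -T/(2k)  =>
    # (sigma_T/T)^2 = (delta_m^2 + delta_k^2 - 2*rho*delta_m*delta_k) / 4
    return 0.5 * np.sqrt(delta_m**2 + delta_k**2 - 2.0 * rho * delta_m * delta_k)

def mc_period_cov(delta_m, delta_k, rho=0.0, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    cov = [[delta_m**2, rho * delta_m * delta_k],
           [rho * delta_m * delta_k, delta_k**2]]
    # Sample mass and stiffness around unit means with the requested covariance.
    m, k = (1.0 + rng.multivariate_normal([0.0, 0.0], cov, size=n)).T
    m, k = np.clip(m, 1e-3, None), np.clip(k, 1e-3, None)  # guard rare tails
    T = 2.0 * np.pi * np.sqrt(m / k)
    return T.std() / T.mean()

delta_m, delta_k = 0.15, 0.30   # assumed CoVs for mass and stiffness
print("FOSM bound :", fosm_period_cov(delta_m, delta_k))
print("Monte Carlo:", mc_period_cov(delta_m, delta_k))
```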

Performance Improvement of TRN Batch Processing Using the Slope Profile (기울기 프로파일을 이용한 일괄처리 방식 지형참조항법의 성능 개선)

  • Lee, Sun-Min;Yoo, Young-Min;Lee, Won-Hee;Lee, Dal-Ho;Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.4
    • /
    • pp.384-390
    • /
    • 2012
  • In this paper, we analyze the navigation error of TERCOM (TErrain COntour Matching), a batch-processing form of TRN (Terrain Referenced Navigation), caused by the scale factor error of the radar altimeter, and we demonstrate the possibility of a false position fix when TERCOM's feature-matching algorithm is used. Based on these findings, we propose a new TRN batch-processing algorithm that uses slope measurements of the terrain. The proposed technique measures periodic changes in the slope of the terrain elevation profile and uses these measurements in the feature-matching algorithm. By using the slope of the terrain data, the impact of scale factor errors can be compensated. Simulations verify that the proposed approach yields improved results compared with the conventional method.
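
A one-dimensional sketch of the slope-matching idea, assuming a straight ground track: slopes are taken as finite differences of the elevation profile, and a normalized correlation (a scale-invariant stand-in, since the abstract does not give the paper's exact matching rule) locates the measured window in the database even under an altimeter scale factor error.

```python
import numpy as np

def slope(profile):
    return np.diff(profile)  # finite-difference slope of an elevation profile

def match_position(db_elev, measured_elev):
    # Normalized slope sequences: a constant scale factor rescales slopes
    # uniformly, so it cancels out of the correlation score.
    s_meas = slope(measured_elev)
    s_meas = (s_meas - s_meas.mean()) / (s_meas.std() + 1e-12)
    best, best_score = 0, -np.inf
    for i in range(len(db_elev) - len(measured_elev)):
        s_db = slope(db_elev[i:i + len(measured_elev)])
        s_db = (s_db - s_db.mean()) / (s_db.std() + 1e-12)
        score = float(np.dot(s_db, s_meas)) / len(s_meas)
        if score > best_score:
            best, best_score = i, score
    return best

rng = np.random.default_rng(1)
terrain = np.cumsum(rng.normal(0, 5, 2000))        # synthetic terrain profile
true_pos = 700
window = terrain[true_pos:true_pos + 100]
measured = 1.05 * window + rng.normal(0, 1, 100)   # 5% scale factor + noise
print("estimated:", match_position(terrain, measured), "true:", true_pos)
```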

Fire Detection using Color and Motion Models

  • Lee, Dae-Hyun;Lee, Sang Hwa;Byun, Taeuk;Cho, Nam Ik
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.4
    • /
    • pp.237-245
    • /
    • 2017
  • This paper presents a fire detection algorithm using color and motion models from video sequences. The proposed method detects changes in the color and motion of entire regions, and thus can be implemented on both fixed and pan/tilt/zoom (PTZ) cameras. The proposed algorithm consists of three parts. The first part exploits color models of flames and smoke: candidate regions in the video frames are extracted with the hue-saturation-value (HSV) color model. The second part models the motion information of flames and smoke: optical flow in the fire candidate region is estimated, and the spatial-temporal distribution of the optical flow vectors is analyzed. The final part accumulates the probability of fire over successive video frames, which reduces false-positive errors when fire-colored objects appear. Experimental results are shown for 100 fire videos in which various types of smoke and flames appear in indoor and outdoor environments. According to the experiments and the comparison, the proposed fire detection algorithm works well in various situations and outperforms conventional algorithms.
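
The three stages map naturally onto a per-frame routine. The sketch below assumes OpenCV; the HSV bounds, score thresholds, and exponential accumulation rule are illustrative placeholders rather than the paper's tuned models.

```python
import cv2
import numpy as np

LOWER = np.array([0, 120, 150])    # assumed flame-like hue/saturation/value
UPPER = np.array([35, 255, 255])

def fire_probability(prev_gray, frame_bgr, acc, alpha=0.1):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER) > 0          # 1) color candidates
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)  # 2) motion
    mag = np.linalg.norm(flow, axis=2)
    # Flames flicker: high spatial variance of flow inside candidate pixels.
    motion_score = mag[mask].std() if mask.any() else 0.0
    frame_score = float(mask.mean() > 0.001 and motion_score > 1.0)
    # 3) temporal accumulation suppresses one-off fire-colored objects.
    return (1 - alpha) * acc + alpha * frame_score, gray
```

Per frame, the caller keeps the previous grayscale image and the running accumulator, raising an alarm only when the accumulated probability stays high over several frames.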

Adaptive Algorithms for Bayesian Spectrum Sensing Based on Markov Model

  • Peng, Shengliang;Gao, Renyang;Zheng, Weibin;Lei, Kejun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.7
    • /
    • pp.3095-3111
    • /
    • 2018
  • Spectrum sensing (SS) is one of the fundamental tasks in cognitive radio. In SS, decisions can be made by comparing the test statistic with a threshold. Conventional adaptive algorithms for SS usually adjust their thresholds according to the radio environment. This paper concentrates on adaptive SS whose threshold is adjusted based on the Markovian behavior of the primary user (PU). Moreover, Bayesian cost is adopted as the performance metric to achieve a trade-off between the false alarm and missed detection probabilities. Two novel adaptive algorithms are proposed: the Markov Bayesian energy detection (MBED) algorithm and the improved MBED (IMBED) algorithm. Both algorithms model the behavior of the PU as a two-state Markov process, with which their thresholds are adaptively adjusted according to the detection results at previous slots. Compared with the existing Bayesian energy detection (BED) algorithm, the MBED algorithm achieves lower Bayesian cost, especially in the high signal-to-noise ratio (SNR) regime, and has the advantage of low computational complexity. The IMBED algorithm is proposed to alleviate the side effects of detection errors at previous slots; it reduces Bayesian cost more significantly and over a wider SNR region. Simulation results illustrate the effectiveness and efficiency of both algorithms.
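
The core mechanism, a Markov prior feeding a Bayesian threshold, can be sketched with Gaussian approximations of the energy statistic. The transition matrix, SNR, and cost values below are assumptions for illustration; the paper's closed-form derivations are not reproduced.

```python
import numpy as np
from scipy.stats import norm

N = 100                      # samples per sensing slot
snr = 0.1                    # linear SNR (assumed)
P = np.array([[0.9, 0.1],    # P[i, j] = P(state j now | state i last slot)
              [0.2, 0.8]])   # two-state Markov model of the primary user
C_fa, C_md = 1.0, 10.0       # Bayesian costs of false alarm / missed detection

mu0, sd0 = N, np.sqrt(2 * N)                          # H0: noise only
mu1, sd1 = N * (1 + snr), np.sqrt(2 * N) * (1 + snr)  # H1: PU present

def bayes_threshold(pi1, grid=np.linspace(mu0 - 4 * sd0, mu1 + 4 * sd1, 2000)):
    # Pick the threshold minimizing expected Bayesian cost for prior pi1:
    # cost = pi0 * C_fa * P_fa(thr) + pi1 * C_md * P_md(thr).
    cost = ((1 - pi1) * C_fa * norm.sf(grid, mu0, sd0)
            + pi1 * C_md * norm.cdf(grid, mu1, sd1))
    return grid[np.argmin(cost)]

prev_decision = 0                 # last slot declared "PU absent"
pi1 = P[prev_decision, 1]         # Markov prior for the current slot
thr = bayes_threshold(pi1)
energy = np.sum(np.random.default_rng(0).normal(0, 1, N) ** 2)
print("decide PU present" if energy > thr else "decide PU absent")
```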

Neuronal Spike Train Decoding Methods for the Brain-Machine Interface Using Nonlinear Mapping (비선형매핑 기반 뇌-기계 인터페이스를 위한 신경신호 spike train 디코딩 방법)

  • Kim, Kyunn-Hwan;Kim, Sung-Shin;Kim, Sung-June
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.54 no.7
    • /
    • pp.468-474
    • /
    • 2005
  • Brain-machine interfaces (BMI) based on neuronal spike trains are regarded as one of the most promising means to restore basic body functions to severely paralyzed patients. The spike train decoding algorithm, which extracts the underlying information from neuronal signals, is essential for the BMI. Previous studies report that a linear filter is effective for this purpose and that there is no noteworthy gain from nonlinear mapping algorithms, in spite of the fact that the neuronal encoding process is obviously nonlinear. We designed several decoding algorithms based on the linear filter, as well as two nonlinear mapping algorithms using the multilayer perceptron (MLP) and support vector machine regression (SVR), and show that the nonlinear algorithms are superior in general. The MLP often showed unsatisfactory performance, especially when carelessly trained. The nonlinear SVR showed the highest performance, which may be due to its superiority in training and generalization. The advantage of the nonlinear algorithms was more pronounced when the spike trains contained false-positive/negative errors.
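
A minimal comparison in the same spirit, on synthetic data: binned spike counts with a few time lags form the design matrix, and a ridge-regularized linear filter is pitted against an RBF-kernel SVR. All sizes, lags, and hyperparameters are assumptions, and scikit-learn stands in for the authors' implementations.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

rng = np.random.default_rng(0)
T, n_units, n_lags = 2000, 20, 5
rates = rng.poisson(3.0, (T, n_units)).astype(float)   # binned spike counts
# Lagged feature matrix: the classic linear-filter design matrix.
X = np.hstack([np.roll(rates, lag, axis=0) for lag in range(n_lags)])[n_lags:]
w = rng.normal(0, 1, X.shape[1])
y = np.tanh(X @ w / 50.0) + rng.normal(0, 0.05, len(X))  # nonlinear target

split = len(X) // 2
linear = Ridge(alpha=1.0).fit(X[:split], y[:split])
svr = SVR(kernel="rbf", C=1.0).fit(X[:split], y[:split])
print("linear filter R^2:", linear.score(X[split:], y[split:]))
print("SVR           R^2:", svr.score(X[split:], y[split:]))
```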

Shot Transition Detection by Compensating Camera Operations (카메라의 동작을 보정한 장면전환 검출)

  • Jang Seok-Woo;Choi Hyung-Il
    • The KIPS Transactions:PartB
    • /
    • v.12B no.4 s.100
    • /
    • pp.403-412
    • /
    • 2005
  • In this paper, we propose an effective method for detecting and classifying shot transitions in video sequences. The proposed method detects and classifies shot transitions, including cuts, fades, and dissolves, after compensating for camera operations in the video sequence, thereby preventing false positives caused by camera motion. It also eliminates locally moving objects in the process of compensating for camera operations, preventing errors caused by moving objects. Experiments comparing the proposed method with previously known methods show that our shot transition approach is a promising solution in terms of performance.
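
The compensation step can be sketched as follows, assuming OpenCV: a global affine motion is estimated from tracked features with RANSAC (which also rejects locally moving objects as outliers), the previous frame is warped, and only the residual difference is scored. The feature counts, histogram size, and the 0.4 cut threshold are assumed values.

```python
import cv2
import numpy as np

def compensated_difference(prev_gray, gray):
    pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 10)
    if pts is None:
        return 1.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    if good.sum() < 6:
        return 1.0
    # Global camera motion; RANSAC discards locally moving objects.
    M, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good],
                                       method=cv2.RANSAC)
    if M is None:
        return 1.0
    h, w = gray.shape
    warped = cv2.warpAffine(prev_gray, M, (w, h))
    # Histogram distance on the motion-compensated pair.
    h1 = cv2.calcHist([warped], [0], None, [64], [0, 256])
    h2 = cv2.calcHist([gray], [0], None, [64], [0, 256])
    return cv2.compareHist(h1, h2, cv2.HISTCMP_BHATTACHARYYA)

# A cut would be declared when the compensated distance exceeds a threshold,
# e.g. compensated_difference(f1, f2) > 0.4 (assumed value).
```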

Quantification of predicted uncertainty for a data-based model

  • Chai, Jangbom;Kim, Taeyun
    • Nuclear Engineering and Technology
    • /
    • v.53 no.3
    • /
    • pp.860-865
    • /
    • 2021
  • A data-based model, such as an AAKR model, is widely used for monitoring drifts of sensors in nuclear power plants. However, since the training and test datasets for a data-based model cannot be constructed from data covering all possible states, the model uncertainty is not sufficient to represent the uncertainty of the estimations; in fact, estimation errors grow much larger when the incoming data come from states outside the training experience. To overcome this limitation, a new measure of uncertainty for a data-based model is developed and the predicted uncertainty is introduced. The predicted uncertainty is defined for every estimation according to the incoming data. In this paper, the AAKR model is used as the data-based model. The predicted uncertainty is similar in magnitude to the model uncertainty when the estimation is made for incoming data from experienced states, but it grows larger otherwise. The characteristics of the predicted uncertainty are studied and its usefulness is demonstrated with pressure signals measured in a flow-loop system. It is expected that the predicted uncertainty can considerably reduce false alarms by using a variable threshold instead of a fixed one.
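
A compact AAKR sketch with a query-dependent uncertainty proxy: the estimate is a kernel-weighted average over the training memory, and the proxy grows as the total kernel weight assigned to the query shrinks. The proxy's exact form is an assumption for illustration, not the paper's definition.

```python
import numpy as np

def aakr(X_mem, x_query, h=1.0):
    d = np.linalg.norm(X_mem - x_query, axis=1)   # distances to memory
    w = np.exp(-(d / h) ** 2)                     # Gaussian kernel weights
    estimate = (w[:, None] * X_mem).sum(0) / (w.sum() + 1e-12)
    # Low total weight => the query lies outside the experienced states,
    # so the predicted uncertainty (and any alarm threshold) should widen.
    predicted_unc = 1.0 / np.sqrt(w.sum() + 1e-12)
    return estimate, predicted_unc

rng = np.random.default_rng(0)
X_mem = rng.normal(0, 1, (500, 4))                # training "memory" matrix
_, u_in = aakr(X_mem, np.zeros(4))                # query from experienced state
_, u_out = aakr(X_mem, np.full(4, 5.0))           # query from unseen state
print("uncertainty inside:", u_in, " outside:", u_out)
```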

Exploration of errors in variance caused by using the first-order approximation in Mendelian randomization

  • Kim, Hakin;Kim, Kunhee;Han, Buhm
    • Genomics & Informatics
    • /
    • v.20 no.1
    • /
    • pp.9.1-9.6
    • /
    • 2022
  • Mendelian randomization (MR) uses genetic variation as a natural experiment to investigate the causal effects of modifiable risk factors (exposures) on outcomes. Two-sample Mendelian randomization (2SMR) is widely used to measure causal effects between exposures and outcomes via genome-wide association studies, and it can increase statistical power by utilizing summary statistics from large consortia such as the UK Biobank. However, the first-order approximation of the standard error is commonly used when applying 2SMR. This approximation can underestimate the variance of causal effects in MR, which can lead to an increased false-positive rate. An alternative is the second-order approximation of the standard error, which considerably corrects the deviation of the first-order approximation. In this study, we simulated MR to show the degree to which the first-order approximation underestimates the variance. We show that, depending on the situation, the first-order approximation can underestimate the variance by almost half compared with the true variance, whereas the second-order approximation is robust and accurate.
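
For the Wald ratio beta = b_Y / b_X, the two delta-method approximations are easy to compare directly against a simulated "true" variance. The effect sizes and standard errors below are illustrative assumptions, not values from the study.

```python
import numpy as np

bX, seX = 0.10, 0.02    # SNP-exposure estimate and its standard error (assumed)
bY, seY = 0.05, 0.02    # SNP-outcome estimate and its standard error (assumed)

# First-order approximation ignores the uncertainty in the denominator;
# the second-order term adds it back via the delta method.
var1 = seY**2 / bX**2
var2 = seY**2 / bX**2 + (bY**2 * seX**2) / bX**4

rng = np.random.default_rng(0)
ratio = rng.normal(bY, seY, 1_000_000) / rng.normal(bX, seX, 1_000_000)
print("first-order :", var1)
print("second-order:", var2)
print("simulated   :", ratio.var())
```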

A Hybrid Soft Computing Technique for Software Fault Prediction based on Optimal Feature Extraction and Classification

  • Balaram, A.;Vasundra, S.
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.5
    • /
    • pp.348-358
    • /
    • 2022
  • Software fault prediction is a method to identify faults in software modules using software properties, which helps to evaluate software quality in terms of cost and effort. Recently, several software fault prediction techniques have been proposed to classify modules as faulty or non-faulty. However, most studies have demonstrated predictive power only on their own databases, and the resulting performance is not consistent. In this paper, we propose a hybrid soft computing technique for SFP based on optimal feature extraction and classification (HST-SFP). First, we introduce the bat-induced butterfly optimization (BBO) algorithm for optimal feature selection among multiple features, which identifies the most informative features and removes unnecessary ones. Second, we develop a layered recurrent neural network (L-RNN) based classifier to predict software faults from the selected features, which enhances detection accuracy. Finally, the proposed HST-SFP technique outperforms competing approaches in terms of probability of detection, accuracy, probability of false alarm, precision, ROC, F-measure, and AUC.
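
Only the select-then-classify structure of the pipeline can be sketched from the abstract. Below, a crude random bit-mask search stands in for bat-induced butterfly optimization and a scikit-learn MLP stands in for the layered RNN; both substitutions are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=0)
rng = np.random.default_rng(0)
best_mask, best_score = np.ones(20, bool), -np.inf
for _ in range(30):                       # crude metaheuristic stand-in
    mask = rng.random(20) < 0.5           # candidate feature subset
    if not mask.any():
        continue
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                        random_state=0)
    score = cross_val_score(clf, X[:, mask], y, cv=3).mean()
    if score > best_score:
        best_mask, best_score = mask, score
print("selected features:", np.flatnonzero(best_mask), "CV acc:", best_score)
```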

Noninvasive Evaluation of Coronary Artery Bypass Graft Patency by Electron Beam Tomography (전자선 단층 촬영을 이용한 관상동맥 우회로 개존의 비침습적 평가)

  • 최규옥;김호석;조범구
    • Journal of Chest Surgery
    • /
    • v.32 no.8
    • /
    • pp.693-701
    • /
    • 1999
  • Recently, non-invasive diagnostic imaging has replaced invasive catheter angiography in the diagnosis of vascular disease, and catheter methods are now almost confined to interventional purposes. The coronary arteries and coronary artery bypass grafts still require catheter techniques because of their small diameter and cardiac motion, so the last challenge for radiologists in this domain is to obtain non-invasive images. Electron beam tomography (EBT), with its high temporal resolution, can image the coronary arteries and coronary artery bypass grafts (CABG); CABG imaging in particular is quite useful for evaluating patency. In our experience, as in others', EBT angiography evaluates CABG patency accurately; accuracy is high for saphenous vein grafts (SVG) owing to their relatively wide lumen, short and straight course, and lesser influence of cardiac motion. The sensitivity and specificity for SVG patency were 92% and 97%, respectively, in the prospective evaluation and 100% each in the retrospective evaluation; one false-positive and one false-negative case were rudimentary errors from the initial learning period. In contrast, analysis of the left internal mammary artery (LIMA) graft was difficult because of its inherently small size and adjacent surgical clips provoking beam-hardening artifacts; combining three-dimensional reconstruction with a flow-mode study was therefore important for improving the accuracy of LIMA patency assessment. The sensitivity and specificity for LIMA patency were 100% and 80% in both the prospective and retrospective evaluations. Therefore, EBT angiography is an accurate non-invasive diagnostic modality for evaluating the patency of CABG, particularly SVGs, and its accuracy can be improved further as the EBT hardware and image reconstruction software develop.
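
The reported figures follow from the standard diagnostic-accuracy definitions; the confusion-matrix counts below are hypothetical, chosen only to show the arithmetic behind a 92% sensitivity and 97% specificity.

```python
# Hypothetical counts for illustration only; the paper reports percentages,
# not the underlying graft counts.
tp, fn = 46, 4      # patent grafts correctly / incorrectly called
tn, fp = 97, 3      # occluded grafts correctly / incorrectly called
sensitivity = tp / (tp + fn)    # true-positive rate
specificity = tn / (tn + fp)    # true-negative rate
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
```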
