• Title/Summary/Keyword: error estimate


Development of Correlation FXLMS Algorithm for the Performance Improvement in the Active Noise Control of Automotive Intake System under Rapid Acceleration (급가속시 자동차 흡기계의 능동소음제어 성능향상을 위한 Correlation FXLMS 알고리듬 개발)

  • Lee, Kyeong-Tae; Shim, Hyoun-Jin; Aminudin, Bin Abu; Lee, Jung-Yoon; Oh, Jae-Eung
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2005.11a / pp.551-554 / 2005
  • Methods for reducing automotive induction noise can be classified into passive control and active control. Passive control, however, loses effectiveness in the low-frequency range (below 500 Hz) and is constrained by the limited space of the engine room, whereas active control can overcome these drawbacks. Active control systems mostly use the LMS (Least-Mean-Square) algorithm because it can easily identify the complex transfer function in real time, and in particular the Filtered-X LMS (FXLMS) algorithm is applied to ANC systems. However, the convergence performance of the LMS algorithm degrades when the FXLMS algorithm is applied to active control of induction noise under rapidly accelerating driving conditions. The Normalized FXLMS algorithm was therefore developed to improve control performance under rapid acceleration; its advantage is that the step size is no longer constant but varies with time. One additional practical difficulty arises when a nonstationary input is used: if the input is zero for consecutive samples, the step size becomes unbounded. To solve this problem, the Correlation FXLMS algorithm was developed. It controls the step size using an estimate of the cross-correlation between the adaptation error and the filtered input signal. In this paper, the performance of the Correlation FXLMS is compared with that of the other FXLMS algorithms through computer simulations.
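
The step-size control described in this abstract can be illustrated with a short sketch. The Python fragment below is not the authors' implementation; it assumes a single-channel system, a known secondary-path model `sec_path`, and a simplified error path, and shows one plausible way to scale the FXLMS step size by a leaky cross-correlation estimate between the error and the filtered reference.

```python
import numpy as np

def correlation_fxlms(x, d, sec_path, n_taps=64, mu_max=0.01, lam=0.99, eps=1e-8):
    """Correlation-FXLMS sketch: the step size is scaled by a running estimate
    of the cross-correlation between the residual error and the filtered
    reference, so adaptation effectively stops when they are uncorrelated
    (or when the input vanishes), avoiding the unbounded normalized step size."""
    w = np.zeros(n_taps)                  # adaptive control filter
    x_buf = np.zeros(n_taps)              # reference signal buffer
    fx_buf = np.zeros(n_taps)             # filtered-reference buffer
    s_buf = np.zeros(len(sec_path))       # buffer feeding the secondary-path model
    r = 0.0                               # leaky cross-correlation estimate
    e_hist = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf                     # anti-noise output
        s_buf = np.roll(s_buf, 1); s_buf[0] = x[n]
        fx = sec_path @ s_buf             # reference filtered through the secondary-path model
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
        e = d[n] - y                      # residual error (secondary path on y omitted for brevity)
        r = lam * r + (1.0 - lam) * e * fx        # cross-correlation estimate
        mu = mu_max * abs(r) / (abs(r) + eps)     # correlation-controlled step size
        w += mu * e * fx_buf                      # FXLMS weight update
        e_hist[n] = e
    return w, e_hist
```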


Analytical Models and their Performance Analysis of Superscalar Processors (수퍼스칼라 프로세서의 해석적 모델 및 성능 분석)

  • Kim, Hak-Jun; Kim, Seon-Mo; Choe, Sang-Bang
    • Journal of KIISE: Computer Systems and Theory / v.26 no.7 / pp.847-862 / 1999
  • This research presents a new analytic model that predicts the instruction execution rate of superscalar processors using a finite-buffered, synchronous queueing model, and that can analyze the performance relationship between the cache and the pipeline. The model takes into account architectural parameters such as instruction-level parallelism, branch frequency, branch prediction accuracy, and cache misses. To validate the model, extensive simulations were performed and compared with the analytic results; in most cases the model estimates the average execution rate within a 10% error of the simulation results. Because the analytic model can expose causes of performance bottlenecks that simulation alone cannot reveal, it provides useful design data for performance improvement, and it can accurately analyze the relationship between cache performance and out-of-order issue pipeline performance needed to balance the system.
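
The paper's finite-buffered synchronous queueing model is not reproduced here. As a much coarser illustration of how such architectural parameters combine into an execution-rate estimate, the toy function below (an assumption, not the paper's model) simply adds branch-misprediction and cache-miss stall terms to an ideal CPI.

```python
def approx_ipc(issue_width, branch_freq, mispredict_rate, mispredict_penalty,
               miss_rate, miss_penalty):
    """Toy analytic throughput estimate (not the paper's queueing model):
    add branch-misprediction and cache-miss stall cycles per instruction
    to the ideal CPI of a machine issuing issue_width instructions/cycle."""
    ideal_cpi = 1.0 / issue_width
    branch_stall = branch_freq * mispredict_rate * mispredict_penalty
    memory_stall = miss_rate * miss_penalty
    return 1.0 / (ideal_cpi + branch_stall + memory_stall)

# Example: 4-wide issue, 20% branches, 5% mispredictions costing 10 cycles,
# 2% cache misses costing 30 cycles -> instructions per cycle
print(approx_ipc(4, 0.20, 0.05, 10, 0.02, 30))
```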

Estimation of Uncertain Moving Object Location Data

  • Ahn Yoon-Ae; Lee Do-Yeol; Hwang Ho-Young
    • Journal of the Korea Computer Industry Society / v.6 no.3 / pp.495-508 / 2005
  • Moving objects are spatiotemporal data whose location or shape changes continuously over time. Their location coordinates are periodically measured and stored in the application system. A linear function is typically used to estimate location information that is not in the system at the query time, but because linear estimation introduces error, a new method is needed to reduce the uncertainty of the location representation. This paper proposes applying cubic spline interpolation to reduce the deviation of linear location estimation. First, we define the location information of a moving object in two-dimensional space. Next, we apply cubic spline interpolation to location estimation in the proposed data model and describe the algorithm of the estimation operation. Finally, the precision of the estimation model is evaluated experimentally, and the results are more accurate than those of the linear-function method.
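
A minimal sketch of the interpolation step described in this abstract, assuming periodically timestamped 2-D coordinates and using SciPy's `CubicSpline`; the sample values and query time are illustrative only.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Periodically measured 2-D positions of a moving object (illustrative values)
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])                     # timestamps (s)
xy = np.array([[0, 0], [8, 3], [14, 9], [18, 17], [20, 26]], float)

spline_x = CubicSpline(t, xy[:, 0])
spline_y = CubicSpline(t, xy[:, 1])

def estimate_linear(tq):
    """Baseline: piecewise-linear estimate of the location at query time tq."""
    return np.array([np.interp(tq, t, xy[:, 0]), np.interp(tq, t, xy[:, 1])])

def estimate_spline(tq):
    """Cubic-spline estimate of the location at query time tq."""
    return np.array([float(spline_x(tq)), float(spline_y(tq))])

tq = 25.0
print("linear estimate:", estimate_linear(tq))
print("spline estimate:", estimate_spline(tq))
```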


Wavelet based Image Reconstruction specific to Noisy X-ray Projections (잡음이 있는 X선 프로젝션에 적합한 웨이블렛 기반 영상재구성)

  • Lee, Nam-Yong; Moon, Jong-Ik
    • Journal of the Institute of Convergence Signal Processing / v.7 no.4 / pp.169-177 / 2006
  • In this paper, we present an efficient image reconstruction method suited to removing various kinds of noise generated in measurements based on X-ray attenuation. Specifically, we present a wavelet method that efficiently removes ring artifacts, which are caused by unavoidable mechanical error in X-ray emitters and detectors, and streak artifacts, which are caused by general observation errors and the Fourier-transform-based reconstruction process. To remove ring-artifact-related noise from the projections, we estimate the noise intensity using the fact that this noise is strongly correlated in the angular direction, and remove it with wavelet shrinkage. We also use the wavelet-vaguelette decomposition for general-purpose noise removal and image reconstruction. Simulation studies show that the proposed method outperforms traditional Fourier-transform-based methods in both ring artifact removal and image reconstruction.
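
The Python sketch below illustrates the general idea of removing angle-correlated, detector-wise offsets from a sinogram with wavelet shrinkage; it is not the paper's exact filter, and the wavelet, decomposition level, and threshold rule are assumptions.

```python
import numpy as np
import pywt

def suppress_ring_artifacts(sinogram, wavelet="db4", level=3, k=3.0):
    """Rough sketch: ring artifacts appear as detector-wise offsets that are
    nearly constant along the angle axis, so estimate them from the
    angle-averaged profile, keep the part that survives wavelet soft
    thresholding as genuine structure, and subtract the remainder."""
    profile = sinogram.mean(axis=0)                    # average over projection angles
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # robust noise-level estimate
    thr = k * sigma
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    smooth = pywt.waverec(denoised, wavelet)[: profile.size]
    ring_offsets = profile - smooth                    # high-frequency detector-wise bias
    return sinogram - ring_offsets[None, :]            # remove the same offset at every angle
```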


Phonetic Transcription based Speech Recognition using Stochastic Matching Method (확률적 매칭 방법을 사용한 음소열 기반 음성 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.5 / pp.696-700 / 2007
  • A new method is presented that improves the performance of a phonetic-transcription-based speech recognition system using a speaker-independent (SI) phonetic recognizer. Since an SI phoneme-HMM-based speech recognition system stores only the phoneme transcription of the input sentence, the storage space can be reduced greatly. However, its performance is worse than that of a speaker-dependent system because of the phoneme recognition errors introduced by the SI models. A new training method is presented that iteratively estimates the phonetic transcription and transformation vectors to reduce the mismatch between the training utterances and the set of SI models using speaker adaptation techniques. Stochastic matching methods are used to estimate the transformation vectors for speaker adaptation. Experiments performed over actual telephone lines show that the error rate can be reduced by about 45% compared to the conventional method.
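
As a very small illustration of stochastic-matching-style adaptation (not the paper's full iterative procedure), the sketch below estimates a single feature-space bias vector from adaptation frames, SI model means, and state/mixture posteriors; the array shapes and equal-variance simplification are assumptions.

```python
import numpy as np

def estimate_bias(features, model_means, posteriors):
    """Estimate one feature-space bias vector b so that shifted features
    (x - b) best match the SI model means, weighted by posteriors.
    features: (T, D) adaptation frames, model_means: (M, D), posteriors: (T, M)."""
    expected = posteriors @ model_means / posteriors.sum(axis=1, keepdims=True)
    return (features - expected).mean(axis=0)   # ML bias under equal variances
```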

Measurement of Travel Time Using Sequence Pattern of Vehicles (차종 시퀀스 패턴을 이용한 구간통행시간 계측)

  • Lim, Joong-Seon; Choi, Gyung-Hyun; Oh, Kyu-Sam; Park, Jong-Hun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.7 no.5 / pp.53-63 / 2008
  • In this paper, we propose a regional travel time measurement algorithm that matches the sequence pattern of vehicle types observed at the origin and the end of a road section, overcoming the limitations of conventional methods such as probe cars or AVI based on license plate recognition. The algorithm treats the passing vehicles as a sequence group of fixed length and measures the regional travel time by searching the origin sequence that is most similar to the sequence observed at the end of the section. Three variants of the algorithm are proposed, depending on the assumed similarity cost function, and the average travel time best suited to the information-providing period can be estimated by eliminating abnormal values caused by vehicles entering and leaving the section. Computer simulations varying the section length, the number of passing cars, and the sequence length show an average maximum error rate within 3.46%, verifying the superior performance of the algorithm.
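
A compact sketch of the sequence-matching idea: slide a window over the origin detector's (timestamp, vehicle type) log, score it against the vehicle-type sequence seen at the downstream detector, and derive a travel time from the best match. The position-wise agreement score used here is an assumed stand-in for the paper's similarity cost functions, and the sample data are illustrative only.

```python
def match_travel_time(origin_log, dest_seq, dest_time, seq_len):
    """Return the travel time implied by the origin window whose vehicle-type
    sequence agrees best with the sequence observed at the destination."""
    types = [vt for _, vt in origin_log]
    times = [ts for ts, _ in origin_log]
    best_score, best_time = -1, None
    for i in range(len(types) - seq_len + 1):
        window = types[i:i + seq_len]
        score = sum(a == b for a, b in zip(window, dest_seq))  # position-wise agreement
        if score > best_score:
            best_score, best_time = score, times[i]
    return dest_time - best_time if best_time is not None else None

# Hypothetical usage: vehicle classes 1-5 detected at the origin and destination
origin_log = [(0.0, 1), (2.1, 3), (3.0, 1), (5.5, 2), (7.2, 4), (8.8, 1),
              (10.1, 5), (12.4, 2), (13.0, 3), (15.6, 1), (17.2, 2), (18.9, 4)]
dest_seq = [2, 4, 1, 5, 2, 3]
print(match_travel_time(origin_log, dest_seq, dest_time=40.0, seq_len=6))
```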


Robust Stereo Matching under Radiometric Change based on Weighted Local Descriptor (광량 변화에 강건한 가중치 국부 기술자 기반의 스테레오 정합)

  • Koo, Jamin; Kim, Yong-Ho; Lee, Sangkeun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.4 / pp.164-174 / 2015
  • In real scenarios, radiometric changes frequently occur during stereo image acquisition, whether multiple cameras with different geometric characteristics and camera parameters are used or a single camera is moved under changing illumination. Conventional stereo matching algorithms have difficulty finding correct correspondences under such conditions because they assume that corresponding pixels have similar color values. In this paper, we present a new method based on a local descriptor that reflects intensity, gradient, and texture information. Furthermore, an entropy-based adaptive weight for the local descriptor is applied to estimate correct correspondences under radiometric variation. The proposed method is tested on Middlebury datasets with radiometric changes and compared with state-of-the-art algorithms. Experimental results show that the proposed scheme yields about 5% less matching error on average than the comparison algorithms.
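
A toy sketch of a descriptor-style matching cost with fixed weights: it combines per-pixel intensity and gradient differences and picks the winner-take-all disparity. The paper's descriptor additionally includes a texture term and entropy-based adaptive weights, which are omitted here, so this only illustrates the weighted combination of cost terms.

```python
import numpy as np

def matching_cost(left, right, d, w):
    """Per-pixel cost at disparity d: weighted sum of intensity and
    horizontal-gradient absolute differences (w = (w_int, w_grad))."""
    shifted = np.roll(right, d, axis=1)              # crude alignment; wrap-around ignored
    gl = np.gradient(left.astype(float), axis=1)
    gr = np.gradient(shifted.astype(float), axis=1)
    c_int = np.abs(left.astype(float) - shifted.astype(float))
    c_grad = np.abs(gl - gr)
    return w[0] * c_int + w[1] * c_grad

def winner_take_all(left, right, max_disp=16, w=(0.4, 0.6)):
    """Disparity map from the minimum-cost disparity at each pixel."""
    costs = np.stack([matching_cost(left, right, d, w) for d in range(max_disp)], axis=0)
    return costs.argmin(axis=0)
```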

Comparative Study of Estimation Methods of the Endpoint Temperature in Basic Oxygen Furnace Steelmaking Process with Selection of Input Parameters

  • Park, Tae Chang; Kim, Beom Seok; Kim, Tae Young; Jin, Il Bong; Yeo, Yeong Koo
    • Korean Journal of Metals and Materials / v.56 no.11 / pp.813-821 / 2018
  • The basic oxygen furnace (BOF) steelmaking process in the steel industry is highly complicated and subject to variations in raw material composition. During the BOF steelmaking process, it is essential to maintain the carbon content and the endpoint temperature of the liquid steel at their set points. This paper presents intelligent models used to estimate the endpoint temperature in the BOF steelmaking process. An artificial neural network (ANN) model and a least-squares support vector machine (LSSVM) model are proposed and their estimation performance is compared; the classical partial least-squares (PLS) method is also compared with them. Estimates from the ANN, LSSVM, and PLS models were compared with operation data, and the root-mean-square error (RMSE) of each model was calculated to evaluate estimation performance. The RMSE of the LSSVM model was 15.91, the best of the three; the RMSE values of the ANN and PLS models were 17.24 and 21.31, respectively. The essential input parameters of the models can be selected by sensitivity analysis. After a sequential input selection process was used to remove insignificant input parameters, the RMSE of the LSSVM model improved to 13.21, better than the value obtained with all 16 parameters. The results show that the LSSVM model with 13 input parameters can be used to calculate the oxygen volume and coolant required to optimally adjust the steel to the target temperature.
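
A hedged sketch of the model-comparison workflow using scikit-learn, with kernel ridge regression standing in for the LSSVM (the two are closely related) and synthetic data in place of the proprietary 16-parameter operation data; only the RMSE evaluation pattern is intended to carry over, not the paper's results.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.neural_network import MLPRegressor
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Placeholder data: X would hold the 16 process parameters, y the endpoint temperature
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))
y = X @ rng.normal(size=16) + rng.normal(scale=5, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LSSVM (KernelRidge stand-in)": KernelRidge(kernel="rbf", alpha=1.0),
    "ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "PLS": PLSRegression(n_components=8),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    pred = np.asarray(m.predict(X_te)).ravel()
    rmse = np.sqrt(mean_squared_error(y_te, pred))   # evaluation metric used in the paper
    print(f"{name}: RMSE = {rmse:.2f}")
```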

Despeckling and Classification of High Resolution SAR Imagery (고해상도 SAR 영상 Speckle 제거 및 분류)

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / v.25 no.5 / pp.455-464 / 2009
  • Lee (2009) proposed a boundary-adaptive despeckling method using a Bayesian model based on a lognormal distribution for image intensity and a Markov random field (MRF) for image texture. The method employs the Point-Jacobian iteration to obtain a maximum a posteriori (MAP) estimate of the despeckled imagery. The boundary-adaptive algorithm is designed to use less information from more distant neighbors as a pixel approaches a boundary, which reduces the chance of mixing in pixel values from adjacent regions with different characteristics. The boundary-adaptive scheme was comprehensively evaluated on simulated data, and the effectiveness of boundary adaptation was demonstrated in Lee (2009). This study, as an extension of Lee (2009), suggests a modified MAP estimation iteration that enhances computational efficiency and incorporates classification. Experiments on simulated data show that boundary adaptation yields clear boundaries as well as reduced classification error. The boundary-adaptive scheme was also applied to high-resolution Terra-SAR data acquired over the west coast of Youngjong-do, and the results imply that it can improve analytical accuracy in SAR applications.
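
The sketch below shows a plain Point-Jacobi MAP iteration on log intensity with a quadratic MRF smoothness prior, i.e. the non-adaptive core of such a despeckler; the boundary-adaptive neighbor weighting and the classification step described in the abstract are omitted, and the parameter values are assumptions.

```python
import numpy as np

def map_despeckle(intensity, beta=2.0, sigma2=0.1, n_iter=50):
    """Point-Jacobi MAP sketch: work on log intensity (consistent with a
    lognormal speckle model), combine the data term with a quadratic MRF
    smoothness prior, and update every pixel from its 4-neighbours'
    previous values."""
    y = np.log(np.maximum(intensity, 1e-6))
    z = y.copy()
    for _ in range(n_iter):
        # sum of the 4 neighbours, replicating values at the image edges
        up    = np.vstack([z[:1], z[:-1]])
        down  = np.vstack([z[1:], z[-1:]])
        left  = np.hstack([z[:, :1], z[:, :-1]])
        right = np.hstack([z[:, 1:], z[:, -1:]])
        nb_sum = up + down + left + right
        # closed-form Jacobi update of the quadratic MAP objective
        z = (y / sigma2 + 2 * beta * nb_sum) / (1.0 / sigma2 + 8 * beta)
    return np.exp(z)
```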

Aerosol Optical Thickness Retrieval Using a Small Satellite

  • Wong, Man Sing; Lee, Kwon-Ho; Nichol, Janet; Kim, Young J.
    • Korean Journal of Remote Sensing / v.26 no.6 / pp.605-615 / 2010
  • This study demonstrates the feasibility of a small satellite, the PROBA platform carrying the Compact High Resolution Imaging Spectrometer (CHRIS), for aerosol retrieval over Hong Kong. The rationale of our technique is to estimate the aerosol reflectance by decomposing the top-of-atmosphere (TOA) reflectance into surface reflectance and Rayleigh path reflectance components. For determining the surface reflectance, a modified Minimum Reflectance Technique (MRT) is used on three winter ortho-rectified CHRIS images (Dec-18-2005, Feb-07-2006, and Nov-09-2006). For validation, the MRT image was compared with ground-based multispectral radiometer measurements and an atmospherically corrected Landsat image. The results show good agreement between the CHRIS-derived surface reflectance and both the ground measurements and the Landsat image (r > 0.84); the root-mean-square errors (RMSE) at 485, 551, and 660 nm are 0.99%, 1.19%, and 1.53%, respectively. For the aerosol retrieval, look-up tables (LUTs) giving aerosol reflectance as a function of AOT were calculated with the SBDART code using AERONET inversion products. The CHRIS-derived aerosol optical thickness (AOT) images were then validated with AERONET sunphotometer measurements, and the differences are 0.05~0.11 (errors of 10~18%) at the 440 nm wavelength. These errors are relatively small compared to those of the operational Moderate Resolution Imaging Spectroradiometer (MODIS) Deep Blue algorithm (within 30%) and the MODIS ocean algorithm (within 20%).
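
A much-simplified sketch of the LUT inversion step: isolate an aerosol reflectance from the TOA signal (ignoring the multiple-scattering coupling handled by a full radiative-transfer treatment) and interpolate it against a precomputed table of aerosol reflectance versus AOT. The table values are illustrative only, not SBDART output.

```python
import numpy as np

def retrieve_aot(rho_toa, rho_rayleigh, rho_surface, lut_aot, lut_rho_aer, transmittance=1.0):
    """Simplified LUT-based AOT retrieval: subtract the Rayleigh path and the
    transmitted surface contribution from the TOA reflectance, then invert the
    remaining aerosol reflectance against the LUT by linear interpolation.
    lut_rho_aer must increase monotonically with lut_aot for np.interp."""
    rho_aer = rho_toa - rho_rayleigh - transmittance * rho_surface
    return np.interp(rho_aer, lut_rho_aer, lut_aot)

# Hypothetical LUT for one wavelength/geometry (values illustrative only)
lut_aot = np.array([0.0, 0.2, 0.5, 1.0, 1.5])
lut_rho_aer = np.array([0.00, 0.03, 0.07, 0.13, 0.18])
print(retrieve_aot(rho_toa=0.16, rho_rayleigh=0.05, rho_surface=0.04,
                   lut_aot=lut_aot, lut_rho_aer=lut_rho_aer))
```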