• Title/Summary/Keyword: Non-Linear Algorithm

Search Result 788

Classification Model of Chronic Gastritis According to The Feature Extraction Method of Radial Artery Pulse Signal (맥파의 특징점 추출 방법에 따른 만성위염 판별 모형)

  • Choi, Sang-Ho;Shin, Ki-Young;Kim, Jeauk;Jin, Seung-Oh;Lee, Tea-Bum
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.1
    • /
    • pp.185-194
    • /
    • 2014
  • One in every 10 persons in Korea suffers from chronic gastritis. Endoscopy is the most common means of diagnosing chronic gastritis; it is precise but accompanied by pain and high cost. According to pulse diagnosis in Traditional East Asian Medicine, stomach disorders can be diagnosed non-invasively and cost-effectively from radial pulse signals at the 'Guan' location of the right wrist. In this study, we developed a classification model of chronic gastritis using pulse signals at the right 'Guan' location. We applied both a linear discriminant method and a logistic regression model to pulse features obtained with a peak-valley detection algorithm and with a Gaussian model. As a result, we obtained sensitivity ranging from 77% to 89% and specificity ranging from 72% to 83%, depending on the classification model and feature extraction method, and the average classification rate was approximately 80% irrespective of the model. Specifically, the Gaussian model yielded superior sensitivities (89.1% and 87.5%) while the peak-valley detection method showed superior specificities (82.8% and 81.3%); the average classification rate (sensitivity + specificity) of the Gaussian model was 80.9%, which was 1.2% ahead of the peak-valley method. In conclusion, we obtained a reliable classification model for chronic gastritis based on radial pulse feature extraction algorithms, where the Gaussian model provided superior sensitivity and the peak-valley method superior specificity.
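A minimal sketch of the kind of pipeline the abstract describes, assuming SciPy and scikit-learn: simple peak-valley features are extracted from a single pulse cycle and fed to a logistic regression classifier. The feature definitions and helper names below are illustrative, not the paper's actual feature set.

```python
# Illustrative sketch only: the paper's exact peak-valley features and
# preprocessing are not specified here; these features are hypothetical.
import numpy as np
from scipy.signal import find_peaks
from sklearn.linear_model import LogisticRegression

def peak_valley_features(pulse, fs=1000.0):
    """Return simple amplitude/timing features from one pulse cycle."""
    peaks, _ = find_peaks(pulse)          # local maxima
    valleys, _ = find_peaks(-pulse)       # local minima
    if len(peaks) == 0 or len(valleys) == 0:
        return np.zeros(3)
    main_peak = peaks[np.argmax(pulse[peaks])]
    main_valley = valleys[np.argmin(pulse[valleys])]
    amp = pulse[main_peak] - pulse[main_valley]       # pulse amplitude
    rise_time = abs(main_peak - main_valley) / fs     # valley-to-peak duration
    width = len(pulse) / fs                           # cycle length
    return np.array([amp, rise_time, width])

def fit_classifier(X_pulses, y):
    """X_pulses: list of single-cycle waveforms; y: 0 = control, 1 = gastritis."""
    X = np.vstack([peak_valley_features(p) for p in X_pulses])
    return LogisticRegression().fit(X, y)
```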

Patterning Zooplankton Dynamics in the Regulated Nakdong River by Means of the Self-Organizing Map (자가조직화 지도 방법을 이용한 조절된 낙동강 내 동물플랑크톤 역동성의 모형화)

  • Kim, Dong-Kyun;Joo, Gea-Jae;Jeong, Kwang-Seuk;Chang, Kwang-Hyson;Kim, Hyun-Woo
    • Korean Journal of Ecology and Environment
    • /
    • v.39 no.1 s.115
    • /
    • pp.52-61
    • /
    • 2006
  • The aim of this study was to analyze the seasonal patterns of zooplankton community dynamics in the lower Nakdong River (Mulgum, river kilometer (RK) 27 from the estuarine barrage) with a Self-Organizing Map (SOM), based on weekly data sampled over ten years (1994~2003). Zooplankton groups are known to play an important role in the food webs of freshwater ecosystems; however, they have received less attention than other community constituents. The non-linear patterning algorithm of the SOM was applied to discover the relationships between river environments and zooplankton community dynamics. Limnological variables (water temperature, dissolved oxygen, pH, Secchi transparency, turbidity, chlorophyll a, discharge, etc.) were taken into account when patterning seasonal changes of zooplankton community structure (comprising rotifers, cladocerans, and copepods). The trained SOM allocated zooplankton on the map plane together with the limnological parameters. The three zooplankton groups showed high similarity to one another in their seasonal patterns. Among the limnological variables, water temperature was highly related to zooplankton community dynamics (especially for cladocerans). The SOM model illustrated the suppression of zooplankton by increased river discharge, particularly in summer. Chlorophyll a concentrations were separated from the zooplankton data set on the map plane, which suggests herbivorous activity of the dominant grazers. This study characterizes zooplankton dynamics in association with limnological parameters using a non-linear method, and the information will be useful for managing the river ecosystem with respect to food web interactions.
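For illustration, a compact self-organizing map written directly in NumPy shows how weekly limnological and zooplankton vectors could be projected onto a 2-D map plane; the grid size, learning schedule, and variable list are assumptions, not the settings used in the study.

```python
# Minimal SOM sketch: maps standardized weekly sample vectors onto a 2-D grid.
import numpy as np

def train_som(data, grid=(8, 6), n_iter=5000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = data.shape[1]
    w = rng.random((grid[0], grid[1], n_feat))            # codebook vectors
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(n_iter):
        frac = t / n_iter
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        x = data[rng.integers(len(data))]                  # random training vector
        dist = np.linalg.norm(w - x, axis=2)               # distance to each node
        by, bx = np.unravel_index(np.argmin(dist), grid)   # best matching unit
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        w += lr * h[..., None] * (x - w)                   # pull neighbourhood toward x
    return w

# data: rows = weekly samples, columns = standardized variables
# (water temperature, DO, pH, chlorophyll a, rotifers, cladocerans, copepods, ...)
```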

Construction of T1 Map Image (T1 이완시간의 영상화)

  • 정은기;서진석;이종태;추성실;이삼현;권영길
    • Progress in Medical Physics
    • /
    • v.6 no.2
    • /
    • pp.83-92
    • /
    • 1995
  • T1 mapping of human anatomy may give characteristic contrast among various tissues and between normal and abnormal tissues. Here, a methodology for constructing a T1 map from several images with different TRs is described using non-linear curve fitting. The general curve-fitting algorithm requires initial trial values T1t and M0t for the variables to be fitted. Three different methods of supplying the trial T1t and M0t are suggested and compared for efficiency and accuracy. The curve-fitting routine was written in ANSI C and executed on a SUN workstation. Several distilled-water phantoms with various concentrations of Gd-DTPA were prepared to examine the accuracy of the curve-fitting program. An MR image was used as the true proton density image without any random noise, and several images with different TRs were generated with theoretical T1 relaxation times of 250, 500, and 1000 msec. Random noise of 1, 5, and 10% was embedded into the simulated images. These images were used to generate T1 maps, and the resultant T1 maps for each T1 were analyzed to study the effect of the random noise on the T1 map.
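A small sketch of the per-pixel fitting step, assuming a saturation-recovery model S(TR) = M0(1 - exp(-TR/T1)) and SciPy's curve_fit; the initial-guess rule shown is one illustrative choice, whereas the paper compares three trial-value strategies.

```python
# Per-pixel T1 estimation by non-linear least squares over multi-TR images.
import numpy as np
from scipy.optimize import curve_fit

def sr_model(tr, m0, t1):
    return m0 * (1.0 - np.exp(-tr / t1))

def fit_t1_map(images, trs):
    """images: array (n_TR, H, W); trs: array of TR values in msec."""
    n, height, width = images.shape
    t1_map = np.zeros((height, width))
    for i in range(height):
        for j in range(width):
            s = images[:, i, j]
            p0 = (s.max(), np.median(trs))            # trial M0t, T1t (illustrative)
            try:
                (m0, t1), _ = curve_fit(sr_model, trs, s, p0=p0, maxfev=2000)
                t1_map[i, j] = t1
            except RuntimeError:
                t1_map[i, j] = 0.0                    # fit failed (e.g. pure noise)
    return t1_map
```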


Why Gabor Frames? Two Fundamental Measures of Coherence and Their Role in Model Selection

  • Bajwa, Waheed U.;Calderbank, Robert;Jafarpour, Sina
    • Journal of Communications and Networks
    • /
    • v.12 no.4
    • /
    • pp.289-307
    • /
    • 2010
  • The problem of model selection arises in a number of contexts, such as subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper studies non-asymptotic model selection for the general case of arbitrary (random or deterministic) design matrices and arbitrary nonzero entries of the signal. In this regard, it generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence among the columns of a design matrix, termed the worst-case coherence and the average coherence. It utilizes these two measures of coherence to provide an in-depth analysis of a simple, model-order agnostic one-step thresholding (OST) algorithm for model selection and proves that OST is feasible for exact as well as partial model selection as long as the design matrix obeys an easily verifiable property, which is termed the coherence property. One of the key insights offered by the ensuing analysis is that OST can successfully carry out model selection even when methods based on convex optimization, such as the lasso, fail due to the rank deficiency of the submatrices of the design matrix. In addition, the paper establishes that if the design matrix has reasonably small worst-case and average coherence then OST performs near-optimally when either (i) the energy of any nonzero entry of the signal is close to the average signal energy per nonzero entry or (ii) the signal-to-noise ratio in the measurement system is not too high. Finally, two other key contributions of the paper are that (i) it provides bounds on the average coherence of Gaussian matrices and Gabor frames, and (ii) it extends the results on model selection using OST to low-complexity, model-order agnostic recovery of sparse signals with arbitrary nonzero entries. In particular, this part of the analysis implies that an Alltop Gabor frame together with OST can successfully carry out model selection and recovery of sparse signals irrespective of the phases of the nonzero entries, even if the number of nonzero entries scales almost linearly with the number of rows of the Alltop Gabor frame.
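The OST step itself is a single correlate-and-threshold operation. The sketch below assumes unit-norm columns and uses a generic universal-style threshold; the exact constants in the paper's coherence-based guarantees are not reproduced here.

```python
# One-step thresholding (OST) sketch for support/model selection.
import numpy as np

def ost_support(X, y, sigma, c=2.0):
    """X: (n, p) design matrix with unit-norm columns; y: measurement vector;
    sigma: noise standard deviation; c: illustrative threshold constant."""
    n, p = X.shape
    corr = X.T @ y                                  # one matrix-vector product
    thresh = c * sigma * np.sqrt(2.0 * np.log(p))   # universal-style threshold
    return np.flatnonzero(np.abs(corr) > thresh)    # estimated support set
```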

An Effective Feature Extraction Method for Fault Diagnosis of Induction Motors (유도전동기의 고장 진단을 위한 효과적인 특징 추출 방법)

  • Nguyen, Hung N.;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.7
    • /
    • pp.23-35
    • /
    • 2013
  • This paper proposes an effective technique for automatically extracting feature vectors from vibration signals for fault classification systems. Conventional mel-frequency cepstral coefficients (MFCCs) are sensitive to noise in vibration signals, which degrades classification accuracy. To solve this problem, this paper proposes spectral envelope cepstral coefficient (SECC) analysis, in which a filter bank based on the spectral envelopes of the vibration signals is constructed in four steps: (1) a linear predictive coding (LPC) algorithm specifies the spectral envelopes of all faulty vibration signals, (2) the envelopes are averaged to obtain a general spectral shape, (3) a gradient descent method finds the extrema of the average envelope and their frequencies, and (4) a non-overlapped filter bank is built whose filter centers are calculated from the distances between the valley frequencies of the envelope. This 4-step filter bank is then used in the cepstral coefficient computation to extract feature vectors. Finally, a multi-layer support vector machine (MLSVM) with various sigma values uses these parameters to identify the fault types of induction motors. Experimental results indicate that the proposed extraction method outperforms other feature extraction algorithms, yielding more than 99.65% classification accuracy.
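As a rough illustration of the envelope step (1), the sketch below estimates autocorrelation-method LPC coefficients and evaluates the resulting smooth spectral envelope of one vibration frame; the model order and FFT length are assumptions.

```python
# LPC spectral-envelope sketch (autocorrelation method).
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coeffs(frame, order=12):
    """Return predictor coefficients a_1..a_order from one signal frame."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    return solve_toeplitz(r[:order], r[1:order + 1])   # Toeplitz normal equations

def lpc_envelope(frame, order=12, n_fft=1024):
    a = lpc_coeffs(frame, order)
    # Envelope is proportional to 1 / |A(e^{jw})| with A(z) = 1 - sum a_k z^{-k}
    # (prediction gain omitted for simplicity).
    A = np.fft.rfft(np.concatenate(([1.0], -a)), n_fft)
    return 1.0 / np.abs(A)
```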

Gauss-Newton Based Emitter Location Method Using Successive TDOA and FDOA Measurements (연속 측정된 TDOA와 FDOA를 이용한 Gauss-Newton 기법 기반의 신호원 위치추정 방법)

  • Kim, Yong-Hee;Kim, Dong-Gyu;Han, Jin-Woo;Song, Kyu-Ha;Kim, Hyoung-Nam
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.7
    • /
    • pp.76-84
    • /
    • 2013
  • In passive emitter localization using instantaneous TDOA (time difference of arrival) and FDOA (frequency difference of arrival) measurements, estimation accuracy can be improved by collecting additional measurements, which requires increasing the number of sensors. However, in an electronic warfare environment, a large number of sensors causes a loss of military strength due to the high probability of intercept, and additional processes such as data links and clock synchronization between the sensors must be considered. Hence, in this paper, passive localization of a stationary emitter is presented using successive TDOA and FDOA measurements from two moving sensors. In this case, since an independent pair of sensors is added to the data set at every measurement instant, the sensor pairs do not share a common reference sensor. Therefore, QCLS (quadratic correction least squares) methods, in which all sensor pairs must include a common reference sensor, cannot be applied. For this reason, a Gauss-Newton algorithm is adopted to solve the non-linear least-squares problem. In addition, to show the performance of the proposed method, we compare the RMSE (root mean square error) of the estimates with the CRLB (Cramer-Rao lower bound) and derive CEP (circular error probable) planes to analyze the expected estimation performance in 2-dimensional space.
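A generic Gauss-Newton iteration of the kind the abstract refers to can be sketched as follows; the TDOA/FDOA measurement model h(p) (sensor trajectories, stacking order, and so on) is assumed to be supplied by the caller, and the numerical Jacobian is an illustrative shortcut.

```python
# Generic Gauss-Newton sketch for a non-linear least-squares localization step:
# p <- p + (J^T J)^{-1} J^T (z - h(p)), with h(p) the stacked predicted
# TDOA/FDOA measurements at candidate emitter position p.
import numpy as np

def gauss_newton(h, z, p0, n_iter=20, eps=1e-6):
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = z - h(p)                                   # measurement residual
        # Numerical Jacobian of h at p (forward differences)
        J = np.column_stack([
            (h(p + eps * e) - h(p)) / eps
            for e in np.eye(len(p))
        ])
        step = np.linalg.lstsq(J, r, rcond=None)[0]    # least-squares Gauss-Newton step
        p = p + step
        if np.linalg.norm(step) < 1e-9:
            break
    return p
```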

Reduction of Radiographic Quantum Noise Using Adaptive Weighted Median Filter (적응성 가중메디안 필터를 이용한 방사선 투과영상의 양자 잡음 제거)

  • Lee, Hoo-Min;Nam, Moon-Hyon
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.22 no.5
    • /
    • pp.465-473
    • /
    • 2002
  • Images are easily corrupted by noise during data transmission, capture, and processing. A technical method of noise analysis and adaptive filtering for reducing quantum noise in radiography is presented. By adjusting the characteristics of the filter according to the local statistics around each pixel within a moving window, it is possible to suppress noise sufficiently while preserving edges and other significant information required for reading. We propose adaptive weighted median (AWM) filters based on local statistics and show two ways of realizing them. One is a simple AWM filter whose weights are given by a simple non-linear function of three local characteristics. The other is an AWM filter constructed from a homogeneous factor (HF). The HF, derived from quantum noise models, enables the filter to recognize the local structures of the image, and an algorithm is proposed for determining the HF suited to detection systems with various internal statistical properties. Experiments show that the performance of the proposed method is superior to that of other filters and models in preserving small details while suppressing noise in homogeneous regions. The proposed algorithms were implemented in Visual C++ on an IBM-PC Pentium 550 for testing, and the effects of the noise filtering were evaluated by comparison with images produced by other existing filtering methods.
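A simplified adaptive weighted median filter is sketched below: the centre weight is driven by a local-variance statistic, which stands in for the paper's homogeneous factor. The weight rule and window size are illustrative assumptions.

```python
# Locally adaptive weighted median filter sketch: heavier centre weight in
# smooth regions (strong smoothing), lighter near edges (detail preserved).
import numpy as np

def adaptive_weighted_median(img, k=3, noise_var=25.0, w_max=9):
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + k, j:j + k].ravel()
            local_var = win.var()
            # More homogeneous window -> heavier centre weight (illustrative rule)
            w_center = 1 + int(w_max * noise_var / (noise_var + local_var))
            weights = np.ones(win.size, dtype=int)
            weights[win.size // 2] = w_center
            out[i, j] = np.median(np.repeat(win, weights))   # weighted median
    return out
```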

The Comparative Study of NHPP Software Reliability Model Based on Log and Exponential Power Intensity Function (로그 및 지수파우어 강도함수를 이용한 NHPP 소프트웨어 무한고장 신뢰도 모형에 관한 비교연구)

  • Yang, Tae-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.8 no.6
    • /
    • pp.445-452
    • /
    • 2015
  • Software reliability in the software development process is an important issue, and software process improvement helps in delivering a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper proposes reliability models with log and power intensity functions (log linear, log power, and exponential power) that can be applied efficiently to software reliability. The parameters were estimated using the maximum likelihood estimator together with the bisection method, and model selection was based on the mean square error (MSE) and the coefficient of determination (R²) in order to identify the most efficient model. Failure analysis using a real data set was carried out to compare the log and power intensity functions, and a Laplace trend test was employed to ensure the reliability of the data. The study confirms that the log-type models are also efficient in terms of reliability (coefficient of determination of 70% or more) and can be used as alternatives to the conventional models. Based on these results, software developers should use prior knowledge of the software to choose a growth model, which can help to identify failure modes.
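As one concrete example of the estimation step, the sketch below fits a log-linear intensity lambda(t) = exp(alpha + beta*t) by maximum likelihood, solving the profiled score equation in beta with bisection; the bracketing interval is an assumption, and the paper's other intensity forms would be handled analogously.

```python
# MLE sketch for an NHPP with log-linear intensity lambda(t) = exp(alpha + beta*t).
# Profiling out alpha gives a one-dimensional score equation in beta.
import numpy as np
from scipy.optimize import bisect

def fit_loglinear_nhpp(times, T):
    """times: observed failure times in (0, T]."""
    t = np.asarray(times, dtype=float)
    n = len(t)

    def score(beta):
        if abs(beta) < 1e-8:                       # continuous limit as beta -> 0
            return t.sum() - n * T / 2.0
        return (t.sum() + n / beta
                - n * T * np.exp(beta * T) / (np.exp(beta * T) - 1.0))

    beta = bisect(score, -1.0, 1.0)                # assumes the root lies in [-1, 1]
    if abs(beta) > 1e-8:
        alpha = np.log(n * beta / (np.exp(beta * T) - 1.0))
    else:
        alpha = np.log(n / T)                      # homogeneous-process limit
    return alpha, beta
```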

Design of Data-centroid Radial Basis Function Neural Network with Extended Polynomial Type and Its Optimization (데이터 중심 다항식 확장형 RBF 신경회로망의 설계 및 최적화)

  • Oh, Sung-Kwun;Kim, Young-Hoon;Park, Ho-Sung;Kim, Jeong-Tae
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.60 no.3
    • /
    • pp.639-647
    • /
    • 2011
  • In this paper, we introduce a design methodology for data-centroid Radial Basis Function neural networks with extended polynomial functions. The two underlying design mechanisms of such networks are the K-means clustering method and Particle Swarm Optimization (PSO). The proposed algorithm uses K-means clustering for efficient processing of the data, and the optimization of the model is carried out using PSO. Four types of polynomials can be used as the connection weights of the RBF neural network: simplified, linear, quadratic, and modified quadratic. Using K-means clustering, the center values of the Gaussian activation functions are selected. The PSO-based RBF neural network results in a structurally optimized network and offers a higher level of flexibility than conventional RBF neural networks. The PSO-based design procedure, applied at each node of the RBF neural network, leads to the selection of preferred parameters with specific local characteristics (such as the number of input variables, a specific set of input variables, and the distribution constant of the activation function) available within the network. To evaluate the performance of the proposed data-centroid RBF neural network with extended polynomial functions, the model is experimented with using nonlinear process data (2-dimensional synthetic data and Mackey-Glass time series data) and Machine Learning datasets (NOx emission process data from a gas turbine plant, Automobile Miles per Gallon (MPG) data, and Boston housing data). For the characteristic analysis of the given nonlinear datasets, as well as the efficient construction and evaluation of the dynamic network model, each dataset is partitioned in two ways: Division I (training and testing datasets) and Division II (training, validation, and testing datasets). A comparative analysis shows that the proposed RBF neural network produces models with higher accuracy and better predictive capability than other intelligent models presented previously.
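A stripped-down version of the data-centroid idea is sketched below: K-means fixes the Gaussian centres and the output weights are obtained by least squares. The extended polynomial weights and the PSO-based structural optimization described in the abstract are omitted, and the hyperparameters shown are illustrative.

```python
# RBF network sketch: K-means centres, Gaussian hidden layer, linear output
# weights fitted by least squares.
import numpy as np
from sklearn.cluster import KMeans

class KMeansRBF:
    def __init__(self, n_centers=10, spread=1.0):
        self.n_centers, self.spread = n_centers, spread

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.spread ** 2))     # Gaussian activations

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_centers, n_init=10).fit(X)
        self.centers = km.cluster_centers_
        Phi = np.column_stack([np.ones(len(X)), self._phi(X)])   # bias + RBF outputs
        self.w = np.linalg.lstsq(Phi, y, rcond=None)[0]
        return self

    def predict(self, X):
        Phi = np.column_stack([np.ones(len(X)), self._phi(X)])
        return Phi @ self.w
```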

Research and development on image luminance meter of road tunnel internal and external (도로터널 내/외부의 영상휘도 측정기 연구개발)

  • Jang, Soon-Chul;Park, Sung-Lim;Ko, Seok-Yong;Lee, Mi-Ae
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.17 no.1
    • /
    • pp.1-9
    • /
    • 2015
  • This paper introduces the development of an imaging luminance meter that measures luminance outside and inside road tunnels. The developed imaging luminance meter complies with both the L20 method and the veiling luminance method of the international standard CIE 88. This paper mainly presents the L20 method because most tunnels currently adopt it. The developed system has an embedded computer for stand-alone operation, along with an Ethernet port, a heater, a fan, a defroster, a wiper, and a sun shield. A compensation algorithm is applied to correct the non-linear response to luminance and integration time. The measurement error is less than 1% when the system is calibrated at a public certification institute. The developed system was also tested in the field in a road tunnel. The test results agreed closely with the reference luminance meter and showed that the developed system operates well in the field. Partial sensor saturation occurred and led to lower measured luminance because of highly reflective objects in the field. Further study is needed for high-luminance measurement.
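A minimal sketch of the compensation step, assuming the sensor response is modelled as a low-order polynomial in (pixel value / integration time) calibrated against a reference luminance meter; the functional form and degree are assumptions, not the implemented algorithm.

```python
# Non-linear response compensation sketch for an imaging luminance meter.
import numpy as np

def fit_response(raw_values, integration_times, ref_luminance, deg=3):
    """Fit luminance = P(raw / t_int) with a low-order polynomial,
    using readings paired with a reference luminance meter."""
    x = np.asarray(raw_values, float) / np.asarray(integration_times, float)
    return np.polyfit(x, ref_luminance, deg)

def compensate(raw_image, t_int, coeffs):
    """Apply the fitted curve pixel-wise to obtain a luminance map (cd/m^2)."""
    return np.polyval(coeffs, raw_image.astype(float) / t_int)
```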