• Title/Summary/Keyword: Levenberg-Marquardt Algorithm


An Effective Method for Dimensionality Reduction in High-Dimensional Space (고차원 공간에서 효과적인 차원 축소 기법)

  • Jeong Seung-Do;Kim Sang-Wook;Choi Byung-Uk
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.4 s.310
    • /
    • pp.88-102
    • /
    • 2006
  • In multimedia information retrieval, multimedia data are represented as vectors in high-dimensional space. To search these vectors efficiently, a variety of indexing methods have been proposed. However, the performance of these indexing methods degrades dramatically with increasing dimensionality, a phenomenon known as the dimensionality curse. To resolve the dimensionality curse, dimensionality reduction methods have been proposed: they map feature vectors in high-dimensional space into vectors in low-dimensional space before indexing the data. This paper proposes a dimensionality reduction method based on a function that approximates the Euclidean distance using the norm and angle components of a vector. First, we identify the causes of errors in angle estimation when approximating the Euclidean distance, and discuss basic directions for reducing those errors. Then, we propose a novel dimensionality reduction method that composes a set of subvectors from a feature vector and maintains only the norm and the estimated angle for every subvector. The selection of a good reference vector is important for accurate estimation of the angle component. We present criteria for a good reference vector, and propose a method that chooses one by using the Levenberg-Marquardt algorithm. Also, we define a novel distance function, and formally prove that it lower-bounds the Euclidean distance. This implies that our approach does not incur any false dismissals while reducing the dimensionality effectively. Finally, we verify the superiority of the proposed method via performance evaluation with extensive experiments.
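The lower-bounding idea described in this abstract can be illustrated with the standard norm-and-angle argument: the angle between two vectors is at least the difference of their angles to a common reference vector, so substituting that difference into the law of cosines can only shrink the computed distance. This is only a sketch of the general principle; the paper's actual distance function, subvector scheme, and reference-vector criteria are not reproduced here.

```python
import numpy as np

def lower_bound_dist(a, b, r):
    """Lower bound on ||a - b|| using only norms and angles to a reference r.

    angle(a, b) >= |angle(a, r) - angle(b, r)| (spherical triangle
    inequality), and cosine is decreasing on [0, pi], so replacing the
    true angle by the difference yields a distance that never exceeds
    the Euclidean distance -- hence no false dismissals in filtering.
    """
    na, nb, nr = map(np.linalg.norm, (a, b, r))
    theta_a = np.arccos(np.clip(a @ r / (na * nr), -1.0, 1.0))
    theta_b = np.arccos(np.clip(b @ r / (nb * nr), -1.0, 1.0))
    d2 = na**2 + nb**2 - 2 * na * nb * np.cos(theta_a - theta_b)
    return np.sqrt(max(d2, 0.0))

rng = np.random.default_rng(0)
a, b, r = rng.normal(size=(3, 64))
assert lower_bound_dist(a, b, r) <= np.linalg.norm(a - b) + 1e-9
```

The quality of the bound depends on how well the reference vector separates the angles, which is the quantity the paper tunes with the Levenberg-Marquardt algorithm.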

RPC Model Generation from the Physical Sensor Model (영상의 물리적 센서모델을 이용한 RPC 모델 추출)

  • Kim, Hye-Jin;Kim, Jae-Bin;Kim, Yong-Il
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.11 no.4 s.27
    • /
    • pp.21-27
    • /
    • 2003
  • The rational polynomial coefficients (RPC) model is a generalized sensor model used as an alternative to the physical sensor model for IKONOS-2 and QuickBird. As sensors increase in number and complexity, and as the need for a standard sensor model has become important, the applicability of the RPC model is also increasing. The RPC model can be substituted for all sensor models, such as the projective camera, the linear pushbroom sensor, and the SAR. This paper is aimed at generating an RPC model from the physical sensor model of the KOMPSAT-1 (Korean Multi-Purpose Satellite) and aerial photography. The KOMPSAT-1 collects $510{\sim}730nm$ panchromatic images with a ground sample distance (GSD) of 6.6 m and a swath width of 17 km by pushbroom scanning. We generated the RPC from a physical sensor model of KOMPSAT-1 and aerial photography. An iterative least squares solution based on the Levenberg-Marquardt algorithm is used to estimate the RPC. In addition, data normalization and regularization are applied to improve the accuracy and minimize noise. The accuracy of the test was evaluated based on the 2-D image coordinates. From this test, we found that the RPC model is suitable for both KOMPSAT-1 and aerial photography.
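The estimation procedure the abstract describes (iterative least squares with Levenberg-Marquardt, plus data normalization) can be sketched on a toy problem. The example below fits a low-order rational polynomial rather than the full RPC coefficient set, and all data values are synthetic stand-ins, not from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for RPC estimation: fit a rational polynomial
# r(x) = (a0 + a1*x + a2*x^2) / (1 + b1*x + b2*x^2)
# to samples produced by a "physical model" (here an arbitrary function).
x_raw = np.linspace(100.0, 500.0, 50)          # e.g. ground coordinate
y_raw = 2000.0 / (1.0 + 0.004 * x_raw) + 3.0   # e.g. image coordinate

# Normalize both axes to [-1, 1]; the abstract notes normalization
# improves accuracy and conditioning of the estimation.
x = (x_raw - x_raw.mean()) / (np.ptp(x_raw) / 2)
y = (y_raw - y_raw.mean()) / (np.ptp(y_raw) / 2)

def residuals(p):
    a0, a1, a2, b1, b2 = p
    return (a0 + a1 * x + a2 * x**2) / (1 + b1 * x + b2 * x**2) - y

# method="lm" selects the Levenberg-Marquardt solver (MINPACK).
fit = least_squares(residuals, x0=np.zeros(5), method="lm")
print(fit.cost)  # near zero once the rational model reproduces the data
```

A real RPC has ratios of cubic polynomials in three normalized ground coordinates, but the iterative LM machinery is the same.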


Prediction of Failure Time of Tunnel Applying the Curve Fitting Techniques (곡선적합기법을 이용한 터널의 파괴시간 예측)

  • Yoon, Yong-Kyun;Jo, Young-Do
    • Tunnel and Underground Space
    • /
    • v.20 no.2
    • /
    • pp.97-104
    • /
    • 2010
  • The materials failure relation $\ddot{\Omega}=A{(\dot{\Omega})}^\alpha$, where $\Omega$ is a measurable quantity such as displacement and the dot superscript denotes the time derivative, may be used to analyze the accelerating creep of materials. The coefficients A and $\alpha$ are determined by fitting the given data sets. In this study, we attempt to predict the failure time of a tunnel using the materials failure relation. Four fitting techniques applying the materials failure relation are attempted to forecast a failure time: the log velocity versus log acceleration technique, the log time versus log velocity technique, and the inverse velocity technique are based on linear least squares fits, while the non-linear least squares technique utilizes the Levenberg-Marquardt algorithm. Since the log velocity versus log acceleration technique utilizes a logarithmic representation of the materials failure relation, it indicates the suitability of the materials failure relation for predicting a failure time of a tunnel. The linear correlation between log velocity and log acceleration appears satisfactory (R = 0.84), which indicates that the materials failure relation is a suitable model for predicting a failure time of a tunnel. Comparing the real failure time of the tunnel with the failure times predicted by the four curve fittings shows that the log time versus log velocity technique yields the best prediction.
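The inverse velocity technique mentioned above has a compact illustration: for the special case $\alpha = 2$, the materials failure relation integrates to $1/\dot{\Omega} = 1/\dot{\Omega}_0 - At$, so inverse velocity decays linearly and reaches zero at the failure time. The coefficients below are made up for the demonstration, not taken from the paper.

```python
import numpy as np

# Inverse-velocity technique (materials failure relation with alpha = 2):
# 1/velocity is linear in time and crosses zero at failure, so a
# straight-line fit extrapolates the failure time t_f.
A, v0 = 0.05, 2.0                      # assumed demo coefficients
t_f_true = 1.0 / (A * v0)              # analytic failure time = 10.0
t = np.linspace(0.0, 8.0, 40)          # monitoring period before failure
v = 1.0 / (1.0 / v0 - A * t)           # accelerating displacement rate

slope, intercept = np.polyfit(t, 1.0 / v, 1)   # linear least squares
t_f_pred = -intercept / slope                  # zero crossing of 1/v
print(round(t_f_pred, 3))  # → 10.0
```

The non-linear least squares variant in the paper instead fits A and $\alpha$ directly with the Levenberg-Marquardt algorithm, avoiding the fixed-$\alpha$ assumption.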

PROPERTIES OF THE VARIATION OF THE INFRARED EMISSION OF OH/IR STARS II. THE L BAND LIGHT CURVES

  • Kwon, Young-Joo;Suh, Kyung-Won
    • Journal of The Korean Astronomical Society
    • /
    • v.43 no.4
    • /
    • pp.123-133
    • /
    • 2010
  • In order to study properties of the pulsation in the infrared emission of long period variables, we collect and analyze the infrared observational data at L band for 12 OH/IR stars. The observational data cover about three decades, including recent data from the ISO and Spitzer. We use the Marquardt-Levenberg algorithm to determine the pulsation period and amplitude for each star and compare them with results of previous investigations at infrared and radio bands. We obtain the relationship between the pulsation periods and the amplitudes at L band. Contrary to the results at K band, there is no difference in the trends between the short and long period regions of the period-luminosity relation at L band. This may be due to the molecular absorption effect at K band. The correlations among the L band parameters, IRAS [12-25] colors, and K band parameters may be explained as results of the dust shell parameters being affected by the stellar pulsation. The large scatter of the correlations could be due to a distribution of central stars with various masses and pulsation modes.
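Determining a pulsation period and amplitude from sparse light-curve data is a standard non-linear fit; SciPy's `curve_fit` uses the Levenberg-Marquardt method for unconstrained problems. The sinusoidal model and every number below are synthetic illustrations, not the paper's data or model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical one-component light-curve model: mean magnitude plus a sinusoid.
def light_curve(t, m0, amp, period, phase):
    return m0 + amp * np.sin(2 * np.pi * t / period + phase)

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 6000, 120))             # days, irregular sampling
mag = light_curve(t, 5.0, 0.8, 1500.0, 0.3) + rng.normal(0, 0.05, t.size)

# curve_fit falls back on Levenberg-Marquardt when no bounds are given.
# The initial period guess matters: period fitting is multimodal, and LM
# only refines within the basin of attraction of the starting point.
p, _ = curve_fit(light_curve, t, mag, p0=[5.0, 1.0, 1400.0, 0.0])
print(p[2])  # fitted period, close to the true 1500 days
```

In practice a period search (e.g. a periodogram) supplies the initial guess, and LM then refines the period, amplitude, and phase jointly.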

A Novel Scheme for detection of Parkinson’s disorder from Hand-eye Co-ordination behavior and DaTscan Images

  • Sivanesan, Ramya;Anwar, Alvia;Talwar, Abhishek;R, Menaka.;R, Karthik.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.9
    • /
    • pp.4367-4385
    • /
    • 2016
  • With millions of people across the globe suffering from Parkinson's disease (PD), an objective, confirmatory test for it is yet to be developed. This research aims to develop a system that can assist the doctor in objectively determining whether the patient is normal or at risk of PD. The proposed work combines eye-hand co-ordination behaviour with DaTscan images in order to determine the risk of this disorder. Initially, the eye-hand co-ordination level of the patient is assessed through a hardware module. Then, the DaTscan image is analysed to extract certain geometrical parameters that indicate the presence of PD. These parameters are finally fed into a multi-layer perceptron neural network trained with the Levenberg-Marquardt (LM) back-propagation algorithm. Experimental results indicate that the proposed system exhibits an accuracy of around 93%.

Improving CMD Areal Density Analysis: Algorithms and Strategies

  • Wilson, R.E.
    • Journal of Astronomy and Space Sciences
    • /
    • v.31 no.2
    • /
    • pp.121-130
    • /
    • 2014
  • Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanation of various algorithms and strategies for optimal performance. A CMD-generation program computes theoretical datasets with simulated observational error, and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results, but also significant saving of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA is not dependent on a random number generator. Discussions include difficulties and overall requirements, such as the need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to the influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low Areal Densities ($\mathcal{A}$), and large variation in $\mathcal{A}$ is reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.

Evaluation of existing bridges using neural networks

  • Molina, Augusto V.;Chou, Karen C.
    • Structural Engineering and Mechanics
    • /
    • v.13 no.2
    • /
    • pp.187-209
    • /
    • 2002
  • The infrastructure system in the United States has been aging faster than the resources available to restore it. Therefore, decisions on allocating resources are based in part on the condition of the structural system. This paper proposes to use neural networks to predict the overall rating of the structural system, because of the successful application of neural networks to other fields that require a "symptom-diagnostic" type relationship. The goal of this paper is to illustrate the potential of using neural networks in civil engineering applications and, particularly, in bridge evaluations. Data collected by the Tennessee Department of Transportation were used as a "test bed" for the study. Multi-layer feed-forward networks were developed using the Levenberg-Marquardt training algorithm. All the neural networks consisted of at least one hidden layer of neurons. Hyperbolic tangent transfer functions were used in the first hidden layer, and log-sigmoid transfer functions were used in the subsequent hidden and output layers. The best-performing neural network consisted of three hidden layers, with three neurons in the first hidden layer, two in the second, and one in the third. The neural network performed well based on a target error of 10%. The results of this study indicate that the potential for using neural networks for the evaluation of infrastructure systems is very good.
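Levenberg-Marquardt training, as used in this and several of the abstracts above, treats the per-sample network errors as residuals of a least-squares problem in the weights. A minimal sketch with a generic LM solver follows; the tanh hidden layer and log-sigmoid output loosely mirror the abstract, but the architecture, data, and all values are illustrative only, not the paper's setup.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "condition features" and a smooth target rating in (0, 1).
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (60, 4))
y = 1 / (1 + np.exp(-X.sum(axis=1)))

n_in, n_hid = 4, 3                                 # tiny illustrative network

def unpack(w):
    i = n_in * n_hid
    W1 = w[:i].reshape(n_in, n_hid)                # input -> hidden weights
    b1 = w[i:i + n_hid]
    W2 = w[i + n_hid:i + 2 * n_hid]                # hidden -> output weights
    b2 = w[-1]
    return W1, b1, W2, b2

def residuals(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                       # hyperbolic tangent layer
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))         # log-sigmoid output
    return out - y                                 # one residual per sample

w0 = rng.normal(0, 0.5, n_in * n_hid + 2 * n_hid + 1)
fit = least_squares(residuals, w0, method="lm")    # LM over all 19 weights
print(fit.cost)  # half the sum of squared training errors after fitting
```

LM builds the Jacobian of all residuals with respect to all weights, which is why it excels on small networks like these but scales poorly to large ones.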

Prediction of compressive strength of bacteria incorporated geopolymer concrete by using ANN and MARS

  • X., John Britto;Muthuraj, M.P.
    • Structural Engineering and Mechanics
    • /
    • v.70 no.6
    • /
    • pp.671-681
    • /
    • 2019
  • This paper examines the applicability of artificial neural networks (ANN) and multivariate adaptive regression splines (MARS) to predict the compressive strength of bacteria-incorporated geopolymer concrete (GPC). The mix is composed of a new bacterial strain, manufactured sand, ground granulated blast furnace slag, silica fume, metakaolin and fly ash. The concentration of sodium hydroxide (NaOH) is maintained at 8 molar, the sodium silicate ($Na_2SiO_3$) to NaOH weight ratio is 2.33, the alkaline liquid to binder ratio is 0.35, and an ambient curing temperature ($28^{\circ}C$) is maintained for all the mixtures. In the ANN, a back-propagation training technique was employed for updating the weights of each layer based on the error in the network output, with the Levenberg-Marquardt algorithm used for the feed-forward back-propagation network. The MARS model was developed by establishing a relationship between a set of predictors and dependent variables; MARS is based on a divide-and-conquer strategy that partitions the training data sets into separate regions, each of which gets its own regression line. Six models based on ANN and MARS were developed to predict the compressive strength of bacteria-incorporated GPC at 1, 3, 7, 28, 56 and 90 days. About 70% of the 84 data sets obtained from experiments were used for developing the models and the remaining 30% were used for testing. From the study, it is observed that the values predicted by the models are in good agreement with the corresponding experimental values, and the developed models are robust and reliable.

Evaluation of Performance of Artificial Neural Network based Hardening Model for Titanium Alloy Considering Strain Rate and Temperature (티타늄 합금의 변형률속도 및 온도를 고려한 인공신경망 기반 경화모델 성능평가)

  • M. Kim;S. Lim;Y. Kim
    • Transactions of Materials Processing
    • /
    • v.33 no.2
    • /
    • pp.96-102
    • /
    • 2024
  • This study evaluates the performance of an artificial neural network (ANN) based hardening model for a titanium alloy (Ti6Al4V) with respect to strain rate and temperature. Uniaxial compression tests were carried out at strain rates from 0.001 /s to 10 /s and temperatures from 575 ℃ to 975 ℃. Using the experimental data, ANN models were trained and tested with different hyperparameters, such as the size of the hidden layer and the optimizer. The input features were the equivalent plastic strain, strain rate, and temperature, while the output value was the equivalent stress. When the data are sufficient and show a smooth tendency, both the Bayesian regularization (BR) and the Levenberg-Marquardt (LM) algorithms perform well in predicting the flow behavior. However, only the BR algorithm remains predictive when the data are insufficient. Furthermore, a proper size of the hidden layer must be confirmed to describe the behavior with a limited number of data.

Efficient Localization Algorithm for Non-Linear Least Square Estimation (비선형적 최소제곱법을 위한 효율적인 위치추정기법)

  • Lee, Jung-Kyu;Kim, YoungJoon;Kim, Seong-Cheol
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.1
    • /
    • pp.88-95
    • /
    • 2015
  • This paper presents an efficient localization algorithm for non-linear least squares estimation. Although non-linear least squares (NLS) estimation algorithms are more accurate than linear least squares (LLS) estimation, NLS algorithms carry a higher computational load because of their iterative nature. This study proposes an efficient algorithm that reduces the complexity of NLS estimation at the cost of a small loss in accuracy. Simulation results show the accuracy and complexity of the proposed algorithm compared to conventional schemes.
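The accuracy/complexity trade-off between LLS and NLS localization is easy to make concrete with the standard range-based formulation, where each residual is the gap between a predicted and measured anchor distance. This is a generic sketch solved with the Levenberg-Marquardt method, not the paper's proposed reduced-complexity algorithm; anchor and target positions are made up.

```python
import numpy as np
from scipy.optimize import least_squares

# Range-based localization as non-linear least squares:
# residual_i = ||x - anchor_i|| - measured_distance_i.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])
d = np.linalg.norm(anchors - target, axis=1)       # noise-free ranges

def residuals(x):
    return np.linalg.norm(anchors - x, axis=1) - d

# A cheap initial guess (here the anchor centroid; a closed-form LLS
# estimate also works) reduces the number of iterations -- the very
# computational load the abstract is concerned with.
est = least_squares(residuals, x0=anchors.mean(axis=0), method="lm")
print(est.x)  # close to [3, 7]
```

With noisy ranges the iterative NLS solution is more accurate than a one-shot linearized estimate, which is the trade-off the paper quantifies.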