• Title/Summary/Keyword: Least Squares Algorithm

Search Results: 567

Design of Fuzzy Clustering-based Neural Networks Classifier for Sorting Black Plastics with the Aid of Raman Spectroscopy (라만분광법에 의한 흑색 플라스틱 선별을 위한 퍼지 클러스터링기반 신경회로망 분류기 설계)

  • Kim, Eun-Hu; Bae, Jong-Soo; Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers, v.66 no.7, pp.1131-1140, 2017
  • This study is concerned with a design methodology for an optimized fuzzy clustering-based neural network classifier for classifying black plastics. Since the amount of waste plastic increases every year, techniques for recycling waste plastic are receiving growing attention. The proposed classifier is based on the architecture of a radial basis function neural network. The hidden layer of the proposed classifier is composed of FCM clustering instead of activation functions, while the connection weights are formed as linear functions whose coefficients are estimated by local least squares estimator (LLSE)-based learning. Because the raw dataset collected from Raman spectroscopy includes more than about three thousand variables, principal component analysis (PCA) is applied for dimensionality reduction. In addition, artificial bee colony (ABC), one of the evolutionary algorithms, is used to identify the architecture and parameters of the proposed network. In the experiments, the proposed classifier sorts the three kinds of plastics that are discharged in the largest quantities in the real world. The effectiveness of the proposed classifier is demonstrated through a performance comparison between the dataset obtained from chemical analysis and the entire dataset extracted directly from Raman spectroscopy.
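
A minimal sketch of the core computation described above, assuming cluster centers have already been found (e.g., by running FCM itself or k-means); the fuzzifier m, the one-hot target encoding, and the single global least-squares solve are simplifications, since the paper estimates coefficients locally per cluster (LLSE):

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy C-means membership grade of each sample in each cluster."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    return 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)

def fit_lse_weights(X, y, centers, m=2.0):
    """Least-squares fit of the linear consequent weights.

    Hidden 'activations' are FCM memberships; each cluster gates an
    affine function of the input (a global solve is shown for brevity)."""
    U = fcm_memberships(X, centers, m)                  # (n, c)
    Xa = np.hstack([np.ones((len(X), 1)), X])           # affine inputs
    Phi = np.hstack([U[:, [k]] * Xa for k in range(len(centers))])
    T = np.eye(int(y.max()) + 1)[y]                     # one-hot targets
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return W

def predict(X, centers, W, m=2.0):
    U = fcm_memberships(X, centers, m)
    Xa = np.hstack([np.ones((len(X), 1)), X])
    Phi = np.hstack([U[:, [k]] * Xa for k in range(len(centers))])
    return np.argmax(Phi @ W, axis=1)                   # class labels
```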

Deriving the Effective Atomic Number with a Dual-Energy Image Set Acquired by the Big Bore CT Simulator

  • Jung, Seongmoon; Kim, Bitbyeol; Kim, Jung-in; Park, Jong Min; Choi, Chang Heon
    • Journal of Radiation Protection and Research, v.45 no.4, pp.171-177, 2020
  • Background: This study aims to determine the effective atomic number (Zeff) from dual-energy image sets obtained using a conventional computed tomography (CT) simulator. The estimated Zeff can be used for deriving the stopping power and material decomposition of CT images, thereby improving dose calculations in radiation therapy. Materials and Methods: An electron-density phantom was scanned using a Philips Brilliance CT Big Bore at 80 and 140 kVp. The estimated Zeff values were compared with those obtained using the calibration phantom by applying the Rutherford, Schneider, and Joshi methods. The fitting parameters were optimized using the nonlinear least squares regression algorithm. The fitting curve and mass attenuation data were obtained from the National Institute of Standards and Technology. The fitting parameters were validated by estimating the residual errors between the reference and calculated Zeff values. Next, the calculation accuracy of Zeff was evaluated by comparing the calculated values with the reference Zeff values of the insert plugs. The exposure levels of patients under additional CT scanning at 80, 120, and 140 kVp were evaluated by measuring the weighted CT dose index (CTDIw). Results and Discussion: The residual errors of the fitting parameters were lower than 2%. The best and worst Zeff values were obtained using the Schneider and Joshi methods, respectively. The maximum differences between the reference and calculated values were 11.3% (for lung during inhalation), 4.7% (for adipose tissue), and 9.8% (for lung during inhalation) when applying the Rutherford, Schneider, and Joshi methods, respectively. Under dual-energy scanning (80 and 140 kVp), the patient exposure level was approximately twice that of general single-energy scanning (120 kVp). Conclusion: Zeff was calculated from two image sets scanned by a conventional single-energy CT simulator, and the results obtained using the three methods were compared. The Zeff calculation based on single-energy scans proved feasible.
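
The calibration step can be illustrated with a generic nonlinear least-squares fit; the power-law form Zeff = a + b·rⁿ and the phantom numbers below are placeholders for demonstration, not the Rutherford, Schneider, or Joshi parameterizations themselves:

```python
import numpy as np
from scipy.optimize import curve_fit

def calib(ratio, a, b, n):
    """Assumed calibration: Zeff = a + b * ratio**n (illustrative form)."""
    return a + b * ratio ** n

# Hypothetical calibration-phantom data: low/high-kVp attenuation ratios
# and reference Zeff values of the insert plugs.
ratio_80_140 = np.array([1.02, 1.10, 1.25, 1.41, 1.60])
zeff_ref     = np.array([6.5, 7.4, 9.7, 11.9, 13.8])

popt, pcov = curve_fit(calib, ratio_80_140, zeff_ref, p0=(1.0, 5.0, 2.0))
residual_pct = 100 * (calib(ratio_80_140, *popt) - zeff_ref) / zeff_ref
print(popt, residual_pct)   # fitted parameters, per-plug residual error
```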

Power spectral density method performance in detecting damages by chloride attack on coastal RC bridge

  • Hadizadeh-Bazaz, Mehrdad; Navarro, Ignacio J.; Yepes, Victor
    • Structural Engineering and Mechanics, v.85 no.2, pp.197-206, 2023
  • The deterioration caused by chloride penetration and carbonation plays a significant role in concrete structures in marine environments. Chloride corrosion in marine concrete structures is often invisible but can end in sudden, dangerous collapse. Therefore, as a novelty, this research investigates the ability of a non-destructive damage detection method, the Power Spectral Density (PSD) method, to diagnose damage caused solely by chloride ions in concrete structures. Furthermore, the accuracy of this method in estimating the amount of annual damage caused by chloride in various parts and positions exposed to seawater was investigated. For this purpose, the RC Arosa bridge in Spain, which connects the island to the mainland across seawater, was numerically modeled and analyzed. In the first step, the chloride corrosion percentage in the reinforcements was calculated for each element position of the bridge. The next step predicted the existence, location, and timing of damage to the entire concrete part of the bridge based on the amount of rebar corrosion in each year. The PSD method was used to monitor the annual loss of reinforcement cross-sectional area and the changes in dynamic characteristics, such as stiffness and mass, over each year of the bridge structure's life, using sensitivity equations and the linear least squares algorithm. This study showed that using different approaches to the PSD method based on rebar chloride corrosion, and assuming 10% errors in software analysis, can help predict the location and near-exact extent of damage zones over time.
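
The least-squares step at the heart of this approach can be sketched as follows; the sensitivity matrix would come from the numerical bridge model, and all numbers here are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_elem = 200, 30                 # spectral lines, structural elements
S = rng.normal(size=(n_freq, n_elem))    # stand-in for dPSD/dtheta from the FE model
theta_true = np.zeros(n_elem)
theta_true[[4, 17]] = [0.12, 0.30]       # hypothetical damage ratios

# Measured change in PSD, with roughly 10% noise as in the abstract
d_psd = S @ theta_true + 0.10 * rng.normal(size=n_freq)

# Linear least squares maps the PSD change back to elementwise damage
theta_hat, *_ = np.linalg.lstsq(S, d_psd, rcond=None)
print(np.argsort(theta_hat)[-2:])        # elements flagged as most damaged
```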

Metabolic Signatures of Adrenal Steroids in Preeclamptic Serum and Placenta Using Weighting Factor-Dependent Acquisitions

  • Lee, Chaelin; Oh, Min-Jeong; Cho, Geum Joon; Byun, Dong Jun; Seo, Hong Seog; Choi, Man Ho
    • Mass Spectrometry Letters, v.13 no.1, pp.11-19, 2022
  • Although translational research draws on clinical chemistry measures, correct weighting factors for linear and quadratic calibration curves with the least-squares regression algorithm have not yet been carefully considered in bioanalytical assays. The objective of this study was to identify steroidogenic roles in preeclampsia and to verify the accuracy of quantitative results by comparing two linear regression models with weighting factors of 1 and 1/x². A liquid chromatography-mass spectrometry (LC-MS)-based adrenal steroid assay was conducted to reveal metabolic signatures of preeclampsia in both serum and placenta samples obtained from 15 preeclamptic patients and 17 age-matched control pregnant women (33.9 ± 4.2 vs. 32.8 ± 5.6 yr, respectively) at 34~36 gestational weeks. Percent biases in the unweighted model (wi = 1) were inversely proportional to concentrations (-739.4 ~ 852.9%), while those of the weighted regression (wi = 1/x²) were < 18% for all variables. The optimized LC-MS assay combined with the weighted linear regression revealed significantly increased maternal serum levels of pregnenolone, 21-deoxycortisol, and tetrahydrocortisone (P < 0.05 for all) in preeclampsia. The serum metabolic ratio of (tetrahydrocortisol + allo-tetrahydrocortisol) / tetrahydrocortisone, indicating 11β-hydroxysteroid dehydrogenase type 2 activity, was decreased (P < 0.005) in patients. In the placenta, local concentrations of androstenedione were changed, while its metabolic ratio to 17α-hydroxyprogesterone, responsible for 17,20-lyase activity, was significantly decreased in patients (P = 0.002). The current bioanalytical LC-MS assay with the corrected weighting factor of 1/x² may provide reliable and accurate quantitative outcomes, suggesting altered steroidogenesis in preeclampsia patients at late gestational weeks in the third trimester.
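
The effect of the weighting factor is easy to reproduce with a toy calibration; the concentrations, responses, and noise level below are invented for illustration:

```python
import numpy as np

conc = np.array([0.1, 0.5, 1, 5, 10, 50, 100])       # calibration levels
resp = 2.0 * conc + 0.05 + conc * np.random.default_rng(1).normal(
    scale=0.03, size=conc.size)                       # ~3% proportional noise

def wls_line(x, y, w):
    """Weighted least squares for y = b*x + a via the normal equations."""
    A = np.column_stack([x, np.ones_like(x)])
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # (slope, intercept)

b1, a1 = wls_line(conc, resp, np.ones_like(conc))     # wi = 1
b2, a2 = wls_line(conc, resp, 1.0 / conc**2)          # wi = 1/x^2

for b, a, tag in [(b1, a1, "w=1"), (b2, a2, "w=1/x^2")]:
    back = (resp - a) / b                             # back-calculated conc
    bias = 100 * (back - conc) / conc
    print(tag, np.round(bias, 1))   # unweighted bias blows up at low conc
```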

Admittance Model-Based Nanodynamic Control of Diamond Turning Machine (어드미턴스 모델을 이용한 다이아몬드 터닝머시인의 초정밀진동제어)

  • Jeong, Sanghwa; Kim, Sangsuk
    • Journal of the Korean Society for Precision Engineering, v.13 no.10, pp.154-160, 1996
  • The control of diamond turning is usually achieved through laser-interferometer feedback of slide position. The limitation of this control scheme is that the feedback signal does not account for the additional dynamics of the tool post and the material removal process. If the tool post is rigid and the material removal process is relatively static, such a non-collocated position feedback control scheme may suffice. However, as accuracy requirements get tighter and desired surface contours become more complex, the need for direct tool-tip sensing becomes inevitable. The physical constraints of the machining process prohibit any reasonable implementation of a tool-tip motion measurement. It is proposed that the measured force normal to the face of the workpiece can be filtered through an appropriate admittance transfer function to yield the estimated depth of cut. This estimate can be compared with the desired depth of cut to generate an adjustment control action in addition to the position feedback control. In this work, the design methodology for the admittance model-based control with a conventional controller is presented. The recursive least-squares algorithm with a forgetting factor is proposed to identify the parameters and update the cutting-process model in real time. The normal cutting forces are measured with a precision dynamometer to identify the cutting dynamics in a real diamond turning process. Simulation results based on the estimated cutting dynamics and the admittance model-based nanodynamic control scheme are shown.

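A generic recursive least-squares estimator with forgetting factor, of the kind the abstract applies to the cutting process, might look like the sketch below; the regressor layout and parameter values are assumptions:

```python
import numpy as np

class RLS:
    """Recursive least squares with exponential forgetting."""
    def __init__(self, n_params, lam=0.98, p0=1e4):
        self.theta = np.zeros(n_params)     # parameter estimate
        self.P = p0 * np.eye(n_params)      # inverse-correlation matrix
        self.lam = lam                      # forgetting factor (< 1)

    def update(self, phi, y):
        """One measurement update for the model y ~= phi @ theta."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)          # gain vector
        self.theta += k * (y - phi @ self.theta)    # correct by error
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Track a hypothetical 2-parameter cutting-force model online
est = RLS(2)
rng = np.random.default_rng(5)
for t in range(500):
    phi = np.array([np.sin(0.01 * t), 1.0])   # made-up regressors
    y = 3.0 * phi[0] + 0.5 + 0.01 * rng.normal()
    est.update(phi, y)
print(est.theta)   # should approach [3.0, 0.5]
```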

Feature selection for text data via sparse principal component analysis (희소주성분분석을 이용한 텍스트데이터의 단어선택)

  • Won Son
    • The Korean Journal of Applied Statistics, v.36 no.6, pp.501-514, 2023
  • When analyzing high-dimensional data such as text data, statistical learning procedures may suffer from over-fitting if all the variables are used as explanatory variables. Furthermore, computational efficiency deteriorates with a large number of variables. Dimensionality reduction techniques such as feature selection or feature extraction are useful for dealing with these problems. Sparse principal component analysis (SPCA) is a regularized least squares method that employs an elastic net-type objective function. SPCA can be used to remove insignificant principal components and to identify important variables from noisy observations. In this study, we propose a dimension reduction procedure for text data based on SPCA. Applying the proposed procedure to real data, we find that the reduced feature set maintains sufficient information in the text data while its size is reduced by removing redundant variables. As a result, the proposed procedure can improve classification accuracy and computational efficiency, especially for classifiers such as the k-nearest neighbors algorithm.
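
A minimal sketch of SPCA-based word selection on a toy corpus; the documents and the sparsity parameter alpha are placeholders, and the study's actual pipeline and tuning will differ:

```python
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.feature_extraction.text import CountVectorizer

docs = ["cheap loan offer now", "meeting schedule project deadline",
        "loan rates offer cheap now", "project meeting notes deadline"]
vec = CountVectorizer().fit(docs)
vocab = np.array(vec.get_feature_names_out())
M = vec.transform(docs).toarray().astype(float)   # document-term matrix

# Larger alpha drives more word loadings exactly to zero
spca = SparsePCA(n_components=2, alpha=0.5, random_state=0).fit(M)
keep = np.any(spca.components_ != 0, axis=0)   # words with nonzero loadings
print(vocab[keep])                             # the reduced feature set
```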

Vision-based Obstacle State Estimation and Collision Prediction using LSM and CPA for UAV Autonomous Landing (무인항공기의 자동 착륙을 위한 LSM 및 CPA를 활용한 영상 기반 장애물 상태 추정 및 충돌 예측)

  • Seongbong Lee; Cheonman Park; Hyeji Kim; Dongjin Lee
    • Journal of Advanced Navigation Technology, v.25 no.6, pp.485-492, 2021
  • Vision-based autonomous precision landing of UAVs requires precise position estimation and landing guidance. For safe landing, the system must also assess the safety of the landing point with respect to ground obstacles and guide the landing only when safety is ensured. In this paper, we propose vision-based navigation and algorithms for determining the safety of the landing point in order to perform autonomous precision landings. For vision-based navigation, a CNN is used to detect the landing pad, and the detection information is used to derive an integrated navigation solution. In addition, a Kalman filter is designed and applied to improve position estimation performance. To determine the safety of the landing point, obstacle detection and position estimation are performed in the same manner, and the velocity of the obstacle is estimated using the least squares method (LSM). Whether a collision with the obstacle will occur is determined from the closest point of approach (CPA), which is calculated using the estimated obstacle state. Finally, flight tests are performed to verify the proposed algorithm.
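
The LSM and CPA steps can be sketched as follows, assuming a constant-velocity obstacle model and made-up vision fixes:

```python
import numpy as np

t = np.linspace(0, 2.0, 21)                            # 21 vision fixes over 2 s
obs = np.array([0.0, 5.0]) + np.outer(t, [1.2, -0.8])  # true obstacle track
obs += 0.05 * np.random.default_rng(2).normal(size=obs.shape)

# Least-squares fit of a constant-velocity model: p(t) = v*t + p0
A = np.column_stack([t, np.ones_like(t)])
(vel, pos0), *_ = np.linalg.lstsq(A, obs, rcond=None)  # rows: v, p(0)

uav_p, uav_v = np.array([0.0, 0.0]), np.array([0.0, 0.0])  # hovering UAV
dp, dv = pos0 - uav_p, vel - uav_v                     # relative state
t_cpa = max(0.0, -dp @ dv / (dv @ dv))                 # time of closest approach
d_cpa = np.linalg.norm(dp + dv * t_cpa)                # miss distance at CPA
print(t_cpa, d_cpa)            # compare d_cpa against a safety radius
```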

An Indoor Localization Algorithm of UWB and INS Fusion based on Hypothesis Testing

  • Long Cheng; Yuanyuan Shi; Chen Cui; Yuqing Zhou
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.5, pp.1317-1340, 2024
  • With the rapid development of information technology, the demand for precise indoor positioning is increasing. Wireless sensor networks, as the most commonly used indoor positioning sensors, play a vital part in precise indoor positioning. However, obstacles and other uncontrollable factors in indoor environments degrade localization precision. Ultra-wideband (UWB) can achieve high-precision, centimeter-level positioning, and an inertial navigation system (INS), which is a fully self-contained guidance system, offers high positioning accuracy. The combination of UWB and INS can both decrease the impact of non-line-of-sight (NLOS) conditions on localization and mitigate the accumulated-error problem of the inertial navigation system. In this paper, a fused UWB and INS positioning method is presented. The UWB data are first clustered using fuzzy C-means (FCM), and a Z hypothesis test is proposed to determine whether a link to a beacon node contains an NLOS distance. If it does, the beacon node is removed; otherwise, it is used to localize the mobile node via least squares localization. When fewer than three beacon nodes remain, a robust extended Kalman filter with M-estimation is utilized for localizing the mobile node. The UWB estimate is then merged with the INS data using an extended Kalman filter to acquire the final location estimate. Simulation and experimental results indicate that the proposed method achieves superior localization precision in comparison with current algorithms.
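
The least squares localization step, applied after NLOS links have been removed, can be illustrated by linearized multilateration; the beacon geometry and noise level below are invented:

```python
import numpy as np

beacons = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
true_pos = np.array([3.0, 7.0])
r = np.linalg.norm(beacons - true_pos, axis=1)
r += 0.05 * np.random.default_rng(3).normal(size=r.size)   # LOS range noise

# Linearize |x - b_i|^2 = r_i^2 by subtracting the first beacon's equation:
# 2 (b_i - b_0)^T x = |b_i|^2 - |b_0|^2 - r_i^2 + r_0^2
A = 2 * (beacons[1:] - beacons[0])
b = (np.sum(beacons[1:]**2, axis=1) - np.sum(beacons[0]**2)
     - r[1:]**2 + r[0]**2)
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_hat)   # close to true_pos
```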

Three-dimensional anisotropic inversion of resistivity tomography data in an abandoned mine area (폐광지역에서의 3차원 이방성 전기비저항 토모그래피 영상화)

  • Yi, Myeong-Jong; Kim, Jung-Ho; Son, Jeong-Sul
    • Geophysics and Geophysical Exploration, v.14 no.1, pp.7-17, 2011
  • We have developed an inversion code for three-dimensional (3D) resistivity tomography that includes the anisotropy effect. The algorithm is based on finite element approximations for the forward modelling, and the Active Constraint Balancing method is adopted to enhance the resolving power of the smoothness-constrained least-squares inversion. Using numerical experiments, we show that anisotropic inversion is necessary to obtain an accurate image of the subsurface when it exhibits strong electrical anisotropy. Moreover, anisotropy can serve as additional information in the interpretation of the subsurface. The algorithm was also applied to a field dataset acquired in an abandoned mine area, where a high-rise apartment block had been built over a mining tunnel. The main purpose of the investigation was to evaluate the safety of the building given the old mining activities. Strong electrical anisotropy was observed and proven to be caused by the geological setting of the site. To handle the anisotropy problem, the field data were inverted with the 3D anisotropic tomography algorithm, yielding 3D subsurface images that match well with geological mapping observations. The inversion results provided the subsurface model for the rock engineering safety analysis, and upon completion of the investigation we could assure the residents that the apartment building is safe.
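
One Gauss-Newton update of a smoothness-constrained least-squares inversion, sketched in one dimension and without the anisotropy and Active Constraint Balancing machinery of the paper's code:

```python
import numpy as np

def smoothness_matrix(n):
    """Second-difference operator L; penalizing |L m| keeps m smooth."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def gauss_newton_step(J, residual, m, lam):
    """Solve (J^T J + lam L^T L) dm = J^T r - lam L^T L m, return m + dm.

    J is the Jacobian of the forward model at the current model m,
    residual is the data misfit, and lam trades data fit against
    model smoothness."""
    L = smoothness_matrix(m.size)
    A = J.T @ J + lam * (L.T @ L)
    b = J.T @ residual - lam * (L.T @ L) @ m
    return m + np.linalg.solve(A, b)
```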

Development and application of GLS OD matrix estimation with genetic algorithm for Seoul inner-ringroad (유전알고리즘을 이용한 OD 추정모형의 개발과 적용에 관한 연구 (서울시 내부순환도로를 대상으로))

  • 임용택; 김현명; 백승걸
    • Journal of Korean Society of Transportation, v.18 no.4, pp.117-126, 2000
  • Conventional methods for collecting origin-destination trips have mainly relied on home or roadside interview surveys. However, these methods tend to be costly, labor intensive, and disruptive to trip makers, so they are not considered suitable for applications such as route guidance, arterial management, and information provision, which are parts of Intelligent Transport Systems deployments. Motivated by these problems, more economical ways to estimate origin-destination trip tables have been studied since the late 1970s. Methods that estimate the O-D table from link traffic counts include entropy maximizing, maximum likelihood, generalized least squares (GLS), and Bayesian inference estimation. In this paper, we formulate a GLS problem with a user equilibrium constraint for estimating O-D trips and develop a solution algorithm using a genetic algorithm, which is known as a global search technique. To evaluate the method, we apply it to the Seoul inner ring road and compare it with the gradient method proposed by Spiess (1990). The results show that the method developed in this paper is superior to the other.

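A toy genetic-algorithm search for an O-D table that minimizes a GLS measure against observed link counts; a fixed link-use proportion matrix stands in for the user-equilibrium assignment that the paper solves properly, and all dimensions and rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n_od, n_link = 6, 4
P = rng.uniform(0, 1, size=(n_link, n_od))   # assumed link-use proportions
od_true = rng.uniform(50, 200, size=n_od)
counts = P @ od_true                         # "observed" link counts
W = np.eye(n_link)                           # GLS weights (identity here)

def gls_error(od):
    d = counts - P @ od
    return d @ W @ d

pop = rng.uniform(0, 250, size=(60, n_od))   # initial population of O-D tables
for gen in range(300):
    fit = np.array([gls_error(ind) for ind in pop])
    elite = pop[np.argsort(fit)[:20]]                       # selection
    parents = elite[rng.integers(0, 20, size=(40, 2))]
    alpha = rng.uniform(size=(40, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # crossover
    children += rng.normal(scale=2.0, size=children.shape)          # mutation
    pop = np.vstack([elite, np.clip(children, 0, None)])
print(gls_error(pop[0]))   # small residual: link counts are reproduced
```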