• Title/Summary/Keyword: model-based estimator

RELATION BETWEEN BLACK HOLE MASS AND BULGE LUMINOSITY IN HARD X-RAY SELECTED TYPE 1 AGNS

  • Son, Suyeon;Kim, Minjin;Barth, Aaron J.;Ho, Luis C.
    • Journal of The Korean Astronomical Society / v.55 no.2 / pp.37-57 / 2022
  • Using I-band images of 35 nearby (z < 0.1) type 1 active galactic nuclei (AGNs) obtained with the Hubble Space Telescope, selected from the 70-month Swift-BAT X-ray source catalog, we investigate the photometric properties of the host galaxies. With a careful treatment of the point-spread function (PSF) model and image decomposition, we robustly measure the I-band brightness and the effective radius of the bulges in our sample. Along with black hole (BH) mass estimates from single-epoch spectroscopic data, we present the relation between BH mass and I-band bulge luminosity (the M_BH-M_I,bul relation) for our sample AGNs. We find that our sample lies offset from the M_BH-M_I,bul relation of inactive galaxies by 0.4 dex; i.e., at a given bulge luminosity, the BH mass of our sample is systematically smaller than that of inactive galaxies. We also demonstrate that the zero-point offset in the M_BH-M_I,bul relation with respect to inactive galaxies is correlated with the Eddington ratio. Based on the Kormendy relation, we find that the mean surface brightness of the ellipticals and classical bulges in our sample is comparable to that of normal galaxies, revealing that bulge brightness is not enhanced in our sample. As a result, we conclude that the deviation in the M_BH-M_I,bul relation from inactive galaxies possibly arises because the scaling factor in the virial BH mass estimator depends on the Eddington ratio.
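
The single-epoch virial estimator mentioned in the abstract combines a broad-line width with a radius-luminosity relation. Below is a minimal Python sketch assuming a commonly used H-beta calibration; the coefficients are illustrative of published fits, and the scale factor f is exactly the quantity the authors argue may vary with Eddington ratio.

```python
import numpy as np

G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33   # solar mass [g]
LT_DAY = 2.59e15   # light-day [cm]

def virial_bh_mass(l5100, fwhm_hbeta, f=1.0):
    """Single-epoch virial BH mass, M_BH = f * R_BLR * dV^2 / G.

    l5100      : continuum luminosity lambda*L_lambda(5100 A) [erg/s]
    fwhm_hbeta : H-beta FWHM [km/s]
    f          : virial scale factor (order unity; per the abstract,
                 possibly a function of the Eddington ratio)
    """
    # Radius-luminosity relation, R_BLR ~ L^0.5 (illustrative coefficients)
    r_blr = 33.65 * (l5100 / 1e44) ** 0.533 * LT_DAY  # [cm]
    dv = fwhm_hbeta * 1e5                             # [cm/s]
    return f * r_blr * dv**2 / G / M_SUN

print(f"{virial_bh_mass(1e44, 3000):.2e}")  # ~6e7 solar masses
```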

Intelligent System for the Prediction of Heart Diseases Using Machine Learning Algorithms with a New Mixed Feature Creation (MFC) Technique

  • Rawia Elarabi;Abdelrahman Elsharif Karrar;Murtada El-mukashfi El-taher
    • International Journal of Computer Science & Network Security / v.23 no.5 / pp.148-162 / 2023
  • Classification systems can significantly assist the medical sector by allowing for the precise and quick diagnosis of diseases, saving time for both doctors and patients. Machine learning algorithms offer a possible way of identifying risk variables. Non-surgical technologies such as machine learning are trustworthy and effective in categorizing healthy and heart-disease patients, and they save time and effort. The goal of this study is to create a medical intelligent decision support system based on machine learning for the diagnosis of heart disease. We use a mixed feature creation (MFC) technique to generate new features from the UCI Cleveland Cardiology dataset. We select the most suitable features using the Least Absolute Shrinkage and Selection Operator (LASSO), Recursive Feature Elimination with Random Forest feature selection (RFE-RF), and the best features of both LASSO and RFE-RF (BLR). Cross-validation and grid search are used to optimize the parameters of the estimators used by these algorithms. Classifier performance metrics, including classification accuracy, specificity, sensitivity, precision, and F1-score, along with execution time and RMSE, are presented independently for each classification model for comparison. Our proposed work finds the best potential outcome across all available prediction models and improves the system's performance, allowing physicians to diagnose heart patients more accurately.
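
As a rough illustration of the feature-selection pipeline this abstract describes (LASSO, RFE-RF, and their combination, followed by cross-validated grid search), here is a hedged scikit-learn sketch. The synthetic data, the choice of intersection for the BLR combination, and all hyperparameter grids are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LassoCV
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the 13-feature UCI Cleveland data
X, y = make_classification(n_samples=300, n_features=13, random_state=0)

# LASSO selection: keep features with nonzero coefficients
lasso_mask = LassoCV(cv=5).fit(X, y).coef_ != 0

# RFE-RF: recursive feature elimination ranked by a random forest
rfe_mask = RFE(RandomForestClassifier(random_state=0),
               n_features_to_select=8).fit(X, y).support_

# "Best of both" (BLR), taken here as the intersection -- an assumption
blr_mask = lasso_mask & rfe_mask

# Cross-validated grid search over the classifier's parameters
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    {"n_estimators": [100, 300], "max_depth": [3, None]},
                    cv=5, scoring="accuracy")
grid.fit(X[:, blr_mask], y)
print(grid.best_params_, round(grid.best_score_, 3))
```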

Adversarial Learning-Based Image Correction Methodology for Deep Learning Analysis of Heterogeneous Images (이질적 이미지의 딥러닝 분석을 위한 적대적 학습기반 이미지 보정 방법론)

  • Kim, Junwoo;Kim, Namgyu
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.457-464 / 2021
  • The advent of the big data era has enabled the rapid development of deep learning, which learns rules by itself from data. In particular, the performance of CNN algorithms has reached the level of adjusting the source data by itself. However, existing image processing methods deal only with the image data itself and do not sufficiently consider the heterogeneous environments in which images are generated. Images generated in heterogeneous environments may carry the same information, yet their features may be expressed differently depending on the photographing environment. This means that not only the environment-specific information of each image but even the shared information is represented by different features, which may degrade the performance of an image analysis model. Therefore, in this paper, we propose a method to improve the performance of an image color constancy model based on adversarial learning that simultaneously uses image data generated in heterogeneous environments. Specifically, the proposed methodology operates through the interaction of a 'Domain Discriminator' that predicts the environment in which an image was taken and an 'Illumination Estimator' that predicts the lighting value. In an experiment on 7,022 images taken in heterogeneous environments, the proposed methodology showed superior performance in terms of angular error compared to existing methods.
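
A minimal PyTorch sketch of the adversarial arrangement the abstract outlines: a shared feature extractor feeding an illumination estimator and a domain discriminator, with a gradient-reversal trick so the shared features become environment-invariant. The network sizes and the gradient-reversal mechanism are assumptions; the paper's actual architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated gradient on the backward
    pass, which supplies the adversarial signal to the shared features."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

# Shared feature extractor (toy-sized)
features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
illum_head = nn.Linear(16, 3)    # 'Illumination Estimator': RGB illuminant
domain_head = nn.Linear(16, 4)   # 'Domain Discriminator': capture environment

x = torch.randn(8, 3, 64, 64)    # a batch of images
f = features(x)
illum_pred = illum_head(f)
domain_pred = domain_head(GradReverse.apply(f))  # reversed-gradient path

# Angular error, the evaluation metric named in the abstract
def angular_error(pred, true):
    cos = nn.functional.cosine_similarity(pred, true, dim=1).clamp(-1, 1)
    return torch.rad2deg(torch.acos(cos))
```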

Using GA based Input Selection Method for Artificial Neural Network Modeling: Application to Bankruptcy Prediction (유전자 알고리즘을 활용한 인공신경망 모형 최적입력변수의 선정: 부도예측 모형을 중심으로)

  • Hong, Seung-Hyun;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.9 no.1 / pp.227-249 / 2003
  • Prediction of corporate failure using past financial data is a well-documented topic. Early studies of bankruptcy prediction used statistical techniques such as multiple discriminant analysis, logit, and probit. Recently, however, numerous studies have demonstrated that artificial intelligence techniques such as neural networks can be an alternative methodology for classification problems to which traditional statistical methods have long been applied. In building a neural network model, the selection of independent and dependent variables should be approached with great care and should be treated as part of the model construction process. Irrespective of the efficiency of a learning procedure in terms of convergence, generalization, and stability, the ultimate performance of the estimator will depend on the relevance of the selected input variables and the quality of the data used. Approaches developed in statistics, such as correlation analysis and stepwise selection, are often very useful. These methods, however, may not be optimal for the development of a neural network model. In this paper, we propose a genetic algorithm approach to find an optimal or near-optimal set of input variables for neural network modeling. The proposed approach is demonstrated by application to bankruptcy prediction modeling. Our experimental results show that this approach significantly increases the overall classification accuracy rate.
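
To make the idea concrete, here is a compact, hedged sketch of GA-based input selection: chromosomes are binary masks over candidate inputs, and fitness is the cross-validated accuracy of a small neural network trained on the selected variables. The population size, operators, and rates are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a small NN on the selected inputs."""
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(12, X.shape[1])).astype(bool)
for gen in range(6):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-6:]]            # truncation selection
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(6, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        child ^= rng.random(X.shape[1]) < 0.05        # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected inputs:", np.flatnonzero(best))
```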


Future Trend Impact Analysis Based on Adaptive Neuro-Fuzzy Inference System (ANFIS 접근방식에 의한 미래 트랜드 충격 분석)

  • Kim, Yong-Gil;Moon, Kyung-Il;Choi, Se-Ill
    • The Journal of the Korea institute of electronic communication sciences / v.10 no.4 / pp.499-505 / 2015
  • Trend Impact Analysis (TIA) is an advanced forecasting tool used in futures studies for identifying, understanding, and analyzing the consequences of unprecedented events on future trends. An adaptive neuro-fuzzy inference system is a kind of artificial neural network that integrates both neural networks and fuzzy logic principles, and it is considered a universal estimator. In this paper, we propose an advanced mechanism to generate more justifiable estimates of the probability of occurrence of an unprecedented event as a function of time, with different degrees of severity, using an Adaptive Neuro-Fuzzy Inference System (ANFIS). The key idea of the paper is to enhance the generic process of reasoning with fuzzy logic and neural networks by adding an additional step of attribute simulation, since unprecedented events do not occur all of a sudden; rather, their occurrence is affected by changes in the values of a set of attributes. An ANFIS approach is used to identify the occurrence and severity of an event, depending on the values of its trigger attributes. The trigger attributes can be calculated by a stochastic dynamic model; then different scenarios are generated using Monte-Carlo simulation. To evaluate the proposed method, a simple simulation is provided concerning the impact of river basin drought on the annual flow of water into a lake.
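
The core of what ANFIS tunes is a first-order Sugeno fuzzy rule base. The toy sketch below evaluates such a rule base for an event-occurrence probability driven by a single trigger attribute, with Monte-Carlo scenarios for that attribute as the abstract suggests; the membership centers, rule consequents, and drought-severity framing are all illustrative assumptions.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno(x):
    """First-order Sugeno inference with two toy rules:
    Rule 1: IF severity is LOW  THEN p = 0.02 + 0.01*x
    Rule 2: IF severity is HIGH THEN p = 0.10 + 0.30*x
    """
    w1, w2 = gauss(x, 0.2, 0.2), gauss(x, 0.8, 0.2)  # firing strengths
    f1, f2 = 0.02 + 0.01 * x, 0.10 + 0.30 * x        # rule consequents
    return (w1 * f1 + w2 * f2) / (w1 + w2)           # weighted average

# Monte-Carlo scenarios for the trigger attribute (drought severity)
severity = np.clip(np.random.default_rng(1).normal(0.5, 0.25, 1000), 0, 1)
print("mean event probability:", sugeno(severity).mean())
```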

A Brake Pad Wear Compensation Method and Performance Evaluation for ElectroMechanical Brake (전기기계식 제동장치의 제동패드 마모보상방법 및 성능평가)

  • Baek, Seung-Koo;Oh, Hyuck-Keun;Park, Choon-Soo;Kim, Seog-Won
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.10 / pp.581-588 / 2020
  • This study examined a brake pad wear compensation method for an Electro-Mechanical Brake (EMB) using a braking test device. A three-phase Interior Permanent Magnet Synchronous Motor (IPMSM) was applied to drive the actuator of the EMB. Current control, speed control, and position control were used to control the clamping force of the EMB. Wear compensation was performed by a software algorithm that updates the motor model equation by comparing the motor output torque current with a reference current. In addition, a simple first-order motor model equation was applied to estimate the output clamping force. The operating time to reach maximum clamping force increased by less than 0.1 seconds compared with the brake pad in its initial condition. The experiment verified that, under the wear condition, the operating time stayed within the 0.5-second reference and the maximum clamping force requirement was satisfied. The software-based wear compensation method in this paper can be performed in the pre-departure test of rolling stock.
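
A hedged sketch of the two ideas named in the abstract: a first-order (proportional) motor-model equation for estimating clamping force from the torque current, and a compensation step that rescales the model when the measured current drifts from the reference current as the pad wears. All constants are invented placeholders, not values from the paper.

```python
def estimate_clamp_force(i_q, k_model=20.0):
    """First-order motor model: clamping force proportional to the
    q-axis torque current. k_model [kN/A] is an assumed placeholder."""
    return k_model * i_q

def compensate_wear(i_measured, i_reference, k_model=20.0, tol=0.05):
    """If the torque current at a given clamping state drifts from the
    reference by more than tol [A], rescale the model constant so the
    force estimate stays consistent despite pad wear."""
    if abs(i_measured - i_reference) > tol:
        k_model *= i_reference / i_measured
    return k_model

k = compensate_wear(i_measured=2.2, i_reference=2.0)  # worn pad draws more
print(estimate_clamp_force(2.2, k))  # ~40 kN, matching the unworn estimate
```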

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taeksoo;Han, Ingoo
    • Proceedings of the Korea Database Society Conference / 1999.06a / pp.175-186 / 1999
  • Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering (or multi-scale decomposition) methods such as wavelet analysis have come to be considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis theoretically extracts much better local information at different time intervals from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, studies related to wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques to support multi-signal decomposition methods for financial time-series forecasting, and we apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of the filter types and the filter parameters: if the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a certain theoretical or statistical criterion, and new and versatile techniques have recently been introduced for this problem. Our study first analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Secondly, by comparing the results of different filtering techniques, we present the different filtering criteria of wavelet analysis that support neural network learning optimization, and we analyze the critical issues related to optimal filter design in wavelet analysis, that is, finding the optimal filter parameters to extract significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints concerning the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal useful in forecasting models, especially well-known neural network models.
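
To ground the threshold-choice discussion, here is a minimal wavelet-shrinkage sketch using PyWavelets, with the "universal" threshold as one of the statistical criteria the abstract alludes to; the wavelet, decomposition level, and synthetic series are assumptions. Too small a threshold overfits the noise; too large a threshold underfits the signal.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
signal = np.sin(8 * np.pi * t) + 0.3 * rng.standard_normal(512)

coeffs = pywt.wavedec(signal, "db4", level=4)
# Noise estimate from the finest detail coefficients (MAD / 0.6745)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
# Universal threshold: sigma * sqrt(2 * log(n))
thresh = sigma * np.sqrt(2 * np.log(len(signal)))
# Soft-threshold every detail level, keep the approximation untouched
coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")
```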


Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taek-Soo;Han, In-Goo
    • Proceedings of the Korea Intelligent Information System Society Conference / 1999.03a / pp.175-186 / 1999
  • Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering (or multi-scale decomposition) methods such as wavelet analysis have come to be considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis theoretically extracts much better local information at different time intervals from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, studies related to wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques to support multi-signal decomposition methods for financial time-series forecasting, and we apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of the filter types and the filter parameters: if the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a certain theoretical or statistical criterion, and new and versatile techniques have recently been introduced for this problem. Our study first analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Secondly, by comparing the results of different filtering techniques, we present the different filtering criteria of wavelet analysis that support neural network learning optimization, and we analyze the critical issues related to optimal filter design in wavelet analysis, that is, finding the optimal filter parameters to extract significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints concerning the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal useful in forecasting models, especially well-known neural network models.
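
Complementing the shrinkage sketch after the earlier listing of this paper, the fragment below feeds wavelet-decomposed sub-signals into a small neural network forecaster, the overall architecture these abstracts describe. The window length, wavelet, and synthetic series are assumptions standing in for the KRW/USD data.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
series = np.cumsum(rng.standard_normal(600))  # stand-in for the FX series

# Each training example: wavelet coefficients of a 32-step window,
# target: the next value of the series
X, y = [], []
for start in range(len(series) - 33):
    window = series[start:start + 32]
    coeffs = pywt.wavedec(window, "db2", level=3)
    X.append(np.concatenate(coeffs))
    y.append(series[start + 32])
X, y = np.array(X), np.array(y)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000,
                     random_state=0).fit(X[:-50], y[:-50])
print("held-out R^2:", model.score(X[-50:], y[-50:]))
```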


The relationship between the numbers of natural teeth and nutritional status of elderly in Korea -based on 2007~2009 national health and nutrition survey data- (한국 노인의 자연치아 수와 영양소 섭취상태와의 관련성 -2007~2009년 국민건강영양조사 자료에 근거하여-)

  • Shin, Bo-Mi;Bae, Su-Myoung;Ryu, Da-Young;Choi, Yong-Keum
    • Journal of Korean society of Dental Hygiene / v.12 no.3 / pp.521-531 / 2012
  • Objectives: The aim of this study was to evaluate the relation between dental health status (number of natural teeth) and nutritional status of Korean elderly using the Korean Dietary Reference Intakes, an objective standard for nutritional intake, based on the Korea National Health and Nutrition Examination Survey database, a large-scale sample obtained by the government. Methods: A complex sampling procedure was used to analyze the fourth-wave data (2007-2009) of the Korea National Health and Nutrition Examination Survey. In preparing the analysis plan file, the stratification variable (variable name: kstrata) and the primary sampling unit as the cluster variable (variable name: PSU) were specified, and the recalculated integrated examination-nutrition weights were applied as the analysis weights. A complex-samples chi-square test was used to estimate the relation between the number of natural teeth and inadequate intake, and the factors included in the model were analyzed by complex-samples logistic regression. Results: Compared with the reference group having 20 or more natural teeth, the edentulous group had a higher risk of consuming less than the recommended level of every nutrient except calcium, riboflavin, and vitamin C (phosphate: OR=1.763, 95% CI=1.273-2.443; thiamine: OR=1.748, 95% CI=1.276-2.395; protein: OR=1.610, 95% CI=1.213-2.138). Conclusions: In this investigation, the number of teeth in Korean elderly over 65 years old was related to nutritional status. In particular, nutrient intake levels differed between the edentulous group and the reference group. Therefore, dental health care is needed from young and middle age to keep teeth healthy throughout life as well as in old age, and even when the dental condition of the aged is poor, it is essential to educate them about the relation between dental health and nutritional intake so that they achieve balanced nutrition.
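
For orientation, a hedged statsmodels sketch of a weighted logistic model in the spirit of the complex-samples analysis above. Note that a plain weighted GLM does not reproduce the design-based (strata/PSU) variance estimates the paper uses, and all column names and data below are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "edentulous": rng.integers(0, 2, 500),          # invented exposure
    "inadequate_protein": rng.integers(0, 2, 500),  # invented outcome
    "weight": rng.uniform(0.5, 2.0, 500),           # survey weight
})

# Survey-weighted logistic regression (weights only; no strata/PSU here)
X = sm.add_constant(df[["edentulous"]].astype(float))
fit = sm.GLM(df["inadequate_protein"], X,
             family=sm.families.Binomial(),
             freq_weights=df["weight"]).fit()
print("odds ratio:", np.exp(fit.params["edentulous"]))
```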

A Digital Phase-locked Loop design based on Minimum Variance Finite Impulse Response Filter with Optimal Horizon Size (최적의 측정값 구간의 길이를 갖는 최소 공분산 유한 임펄스 응답 필터 기반 디지털 위상 고정 루프 설계)

  • You, Sung-Hyun;Pae, Dong-Sung;Choi, Hyun-Duck
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.4 / pp.591-598 / 2021
  • The digital phase-locked loop (DPLL) is a circuit used for phase synchronization and is widely employed in fields such as communications and circuit design. State estimators are used to design digital phase-locked loops, and infinite impulse response (IIR) state estimators such as the well-known Kalman filter have typically been used. In general, the performance of an IIR state estimator-based digital phase-locked loop is excellent, but sudden performance degradation may occur in unexpected situations such as an inaccurate initial value, model error, or disturbance. In this paper, we propose a minimum variance finite impulse response (FIR) filter with an optimal horizon size for designing a new digital phase-locked loop. A numerical method is introduced to obtain the horizon size (the length of the measurement window), which is an important parameter of the proposed FIR filter. To obtain the gain, the error covariance matrix is set as a cost function, and a linear matrix inequality is used to minimize it. To verify the superiority and robustness of the proposed digital phase-locked loop, a simulation was performed for comparison and analysis with the existing method in a situation where the noise information is inaccurate.
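
To illustrate the FIR estimation idea (not the paper's LMI-based minimum variance gain), here is a sketch of an unbiased FIR estimator for a toy phase/frequency model: the estimate uses only the last N measurements, so no initial state or infinite memory is involved, and the horizon size N is exactly the parameter the paper optimizes. The dynamics, noise level, and horizon are assumptions.

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])  # [phase, frequency] dynamics
C = np.array([[1.0, 0.0]])              # phase detector measures phase only
N = 8                                    # horizon size (the key parameter)

# Stack the model over the horizon: y_{k-N+1+i} = C A^{-(N-1-i)} x_k
O = np.vstack([C @ np.linalg.matrix_power(A, -(N - 1 - i)) for i in range(N)])

# Unbiased FIR gain by least squares; the paper instead minimizes the
# error covariance via a linear matrix inequality to get the MV gain.
H = np.linalg.inv(O.T @ O) @ O.T

rng = np.random.default_rng(0)
x, ys = np.array([0.0, 0.1]), []
for _ in range(N):
    ys.append((C @ x)[0] + 0.01 * rng.standard_normal())  # noisy phase
    x = A @ x
print(H @ np.array(ys))  # [phase, frequency] estimate at the last sample
```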