• Title/Summary/Keyword: Nonlinear Modeling

Search results: 1,600

Comparison of Algorithms for Generating Parametric Image of Cerebral Blood Flow Using ${H_2}^{15}O$ PET Positron Emission Tomography (${H_2}^{15}O$ PET을 이용한 뇌혈류 파라메트릭 영상 구성을 위한 알고리즘 비교)

  • Lee, Jae-Sung;Lee, Dong-Soo;Park, Kwang-Suk;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine, v.37 no.5, pp.288-300, 2003
  • Purpose: To obtain regional cerebral blood flow and the tissue-blood partition coefficient from ${H_2}^{15}O$ PET time-activity curves, the parameters of the Kety model are conventionally fitted by nonlinear least squares (NLS) analysis. However, NLS requires considerable computation time and is therefore impractical for the pixel-by-pixel analysis needed to generate parametric images of these parameters. In this study, we investigated several fast parameter estimation methods for parametric image generation and compared their statistical reliability and computational efficiency. Materials and Methods: These methods included linear least squares (LLS), linear weighted least squares (LWLS), linear generalized least squares (GLS), linear generalized weighted least squares (GWLS), weighted integration (WI), and a model-based clustering method (CAKS). ${H_2}^{15}O$ dynamic brain PET with a Poisson noise component was simulated using the numerical Zubal brain phantom. Error and bias in the estimation of rCBF and the partition coefficient, as well as computation time, were estimated and compared under various noise levels. In addition, parametric images from ${H_2}^{15}O$ dynamic brain PET data acquired in 16 healthy volunteers under various physiological conditions were compared to examine the utility of these methods for real human data. Results: The fast algorithms produced parametric images with similar image quality and statistical reliability. When the CAKS and LLS methods were used in combination, computation time was significantly reduced, to less than 30 seconds for $128{\times}128{\times}46$ images on a Pentium III processor. Conclusion: Parametric images of rCBF and the partition coefficient with good statistical properties can be generated within a computation time short enough to be acceptable in clinical settings.
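For orientation, the linearization behind the LLS approach compared above can be written as $C_T(t) = K_1\int_0^t C_p\,ds - k_2\int_0^t C_T\,ds$, so each pixel's $K_1$ (proportional to rCBF) and $k_2$ follow from a single linear regression, and the partition coefficient from $K_1/k_2$. The sketch below is a minimal, hypothetical NumPy illustration of this idea, not the authors' implementation; the function name, array layout, and trapezoidal integration are assumptions.

```python
import numpy as np

def lls_kety(t, cp, tac):
    """Linear least-squares (LLS) fit of the one-tissue Kety model.

    Uses the integrated form C_T(t) = K1*int_0^t Cp - k2*int_0^t C_T,
    so K1 and k2 come from one linear regression and the partition
    coefficient is K1/k2. t, cp, tac are 1-D arrays of frame mid-times,
    plasma input, and a single pixel/ROI tissue time-activity curve.
    """
    # cumulative integrals via the trapezoidal rule
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2.0)))
    int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * (tac[1:] + tac[:-1]) / 2.0)))
    A = np.column_stack([int_cp, -int_ct])               # design matrix
    (k1, k2), *_ = np.linalg.lstsq(A, tac, rcond=None)   # solve the linear system
    return k1, k2, k1 / k2

# For a parametric image, apply lls_kety to every voxel time-activity curve;
# the per-voxel regression can be fully vectorized, which is what makes LLS fast.
```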

Modeling and Intelligent Control for Activated Sludge Process (활성슬러지 공정을 위한 모델링과 지능제어의 적용)

  • Cheon, Seong-pyo;Kim, Bongchul;Kim, Sungshin;Kim, Chang-Won;Kim, Sanghyun;Woo, Hae-Jin
    • Journal of Korean Society of Environmental Engineers, v.22 no.10, pp.1905-1919, 2000
  • The main motivation of this research is to develop an intelligent control strategy for the Activated Sludge Process (ASP). ASP is a complex and nonlinear dynamic system because of the characteristics of the wastewater, changes in influent flow rate, weather conditions, and other factors. The mathematical model of ASP also includes uncertainties that are ignored or not considered by the process engineer or controller designer. The ASP is generally controlled by a PID controller with fixed proportional, integral, and derivative gains, which are adjusted by experts with extensive experience in the ASP. An ASP model based on $Matlab^{(R)}5.3/Simulink^{(R)}3.0$ is developed in this paper. The performance of the model is tested against IWA (International Water Association) and COST (European Cooperation in the field of Scientific and Technical Research) data that include steady-state results over 14 days. The advantage of the developed model is that the user can easily modify or change the controller through the graphical user interface. The ASP model, as a typical nonlinear system, can be used to simulate and test the proposed controller for educational purposes. Various control methods are applied to the ASP model and the control results are compared in order to apply the proposed intelligent control strategy to a real ASP. Three control methods are designed and tested: a conventional PID controller, a fuzzy logic approach that modifies the setpoints, and a fuzzy-PID control method. The proposed fuzzy-logic-based setpoint changer shows better performance and robustness under disturbances. An objective function can be defined and included in the proposed control strategy to improve effluent water quality and to reduce operating cost in a real ASP.
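For reference, the conventional fixed-gain PID baseline mentioned above can be sketched in a few lines. The snippet below is an illustrative, hypothetical controller (the gains, time step, and the dissolved-oxygen loop named in the comments are assumptions), not the paper's Matlab/Simulink model or its fuzzy setpoint changer.

```python
class PID:
    """Minimal fixed-gain discrete PID controller (illustrative baseline only)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                    # integral term
        derivative = (error - self.prev_error) / self.dt    # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Hypothetical use in a dissolved-oxygen control loop of an ASP simulator:
# pid = PID(kp=2.0, ki=0.1, kd=0.05, dt=60.0)
# airflow = pid.update(do_setpoint, do_measured)
```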


Multiple Linear Analysis for Generating Parametric Images of Irreversible Radiotracer (비가역 방사성추적자 파라메터 영상을 위한 다중선형분석법)

  • Kim, Su-Jin;Lee, Jae-Sung;Lee, Won-Woo;Kim, Yu-Kyeong;Jang, Sung-June;Son, Kyu-Ri;Kim, Hyo-Cheol;Chung, Jin-Wook;Lee, Dong-Soo
    • Nuclear Medicine and Molecular Imaging, v.41 no.4, pp.317-325, 2007
  • Purpose: Biological parameters can be quantified from dynamic PET data using compartment modeling and nonlinear least squares (NLS) estimation. However, generating parametric images with NLS is not practical because of the initial-value problem and excessive computation time. For irreversible models, Patlak graphical analysis (PGA) has been commonly used as an alternative to the NLS method. In PGA, however, the start time ($t^*$, the time at which the linear phase starts) has to be determined. In this study, we suggest a new multiple linear analysis for irreversible radiotracers (MLAIR) to estimate the fluoride bone influx rate (Ki). Methods: $[^{18}F]Fluoride$ dynamic PET scans were acquired for 60 min in three normal mini-pigs. The plasma input curve was derived from blood sampling of the femoral artery. Tissue time-activity curves were measured by drawing regions of interest (ROIs) on the femur head, vertebra, and muscle. Parametric images of Ki were generated using the MLAIR and PGA methods. Results: In the ROI analysis, Ki values estimated with the MLAIR and PGA methods were slightly higher than those of NLS, but the results of MLAIR and PGA were equivalent. Patlak slopes (Ki) changed with different $t^*$ values in low-uptake regions. Compared with PGA, the quality of the parametric image was considerably improved with the new method. Conclusion: The results showed that MLAIR is an efficient and robust method for generating Ki parametric images from $[^{18}F]Fluoride$ PET. It will also be a good alternative to PGA for radiotracers described by an irreversible three-compartment model.
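As context for the comparison, the PGA baseline fits $C_t(t)/C_p(t) = K_i \cdot \left(\int_0^t C_p\,ds\right)/C_p(t) + V$ for $t \geq t^*$, and $K_i$ is the slope of that line. The snippet below is a minimal, hypothetical NumPy version of this baseline (the function name and array conventions are assumptions); MLAIR itself is the authors' method and is not reproduced here.

```python
import numpy as np

def patlak_ki(t, cp, ct, t_star):
    """Patlak graphical analysis for an irreversible radiotracer.

    Fits Ct(t)/Cp(t) = Ki * (int_0^t Cp ds)/Cp(t) + V for t >= t_star
    and returns the slope Ki (influx rate) and the intercept V.
    """
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2.0)))
    x = int_cp / cp                 # "normalized time"
    y = ct / cp
    use = t >= t_star               # keep only the linear phase
    ki, v = np.polyfit(x[use], y[use], 1)
    return ki, v
```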

A Comparative Evaluation of Multiple Meteorological Datasets for the Rice Yield Prediction at the County Level in South Korea (우리나라 시군단위 벼 수확량 예측을 위한 다종 기상자료의 비교평가)

  • Cho, Subin;Youn, Youjeong;Kim, Seoyeon;Jeong, Yemin;Kim, Gunah;Kang, Jonggu;Kim, Kwangjin;Cho, Jaeil;Lee, Yangwon
    • Korean Journal of Remote Sensing, v.37 no.2, pp.337-357, 2021
  • Because the growth of paddy rice is affected by meteorological factors, the selection of appropriate meteorological variables is essential to build a rice yield prediction model. This paper examines the suitability of multiple meteorological datasets for rice yield modeling in South Korea over 1996-2019 and conducts a hindcast experiment for rice yield using a machine learning method that considers the nonlinear relationships between meteorological variables and rice yield. In addition to the ASOS in-situ observations, we used the CRU-JRA ver. 2.1 and ERA5 reanalysis datasets. From the multiple meteorological datasets, we extracted four common variables (air temperature, relative humidity, solar radiation, and precipitation) and analyzed the characteristics of each dataset and their associations with rice yields. CRU-JRA ver. 2.1 showed overall agreement with the other datasets. While relative humidity showed little relationship with rice yields, solar radiation showed a relatively high correlation with them. Using the air temperature, solar radiation, and precipitation of July, August, and September, we built a random forest model for the hindcast experiments of rice yields. The model with CRU-JRA ver. 2.1 showed the best performance, with a correlation coefficient of 0.772. Solar radiation had the highest variable importance in the prediction model, which is in accordance with generic agricultural knowledge. This paper offers guidance for selecting among multiple meteorological datasets for rice yield modeling.
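A hindcast of the kind described above might be organized as in the sketch below, which trains a random forest on July-September temperature, solar radiation, and precipitation and evaluates it with the correlation coefficient. This is a hypothetical outline, not the authors' code: the column names, the leave-one-year-out scheme, and the hyperparameters are all assumptions.

```python
import pandas as pd
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut

# df: one row per county-year; the monthly predictor columns and the
# "yield" / "year" column names below are hypothetical.
FEATURES = ["tmean_jul", "tmean_aug", "tmean_sep",
            "srad_jul", "srad_aug", "srad_sep",
            "prcp_jul", "prcp_aug", "prcp_sep"]

def hindcast(df):
    """Leave-one-year-out random forest hindcast of county-level rice yield."""
    preds = pd.Series(index=df.index, dtype=float)
    for train_idx, test_idx in LeaveOneGroupOut().split(df, groups=df["year"]):
        rf = RandomForestRegressor(n_estimators=500, random_state=0)
        rf.fit(df.iloc[train_idx][FEATURES], df.iloc[train_idx]["yield"])
        preds.iloc[test_idx] = rf.predict(df.iloc[test_idx][FEATURES])
    r, _ = pearsonr(df["yield"], preds)   # observed vs. hindcast correlation
    return preds, r
```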

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems, v.24 no.4, pp.1-32, 2018
  • In addition to stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated only on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to reflect diverse interests and to avoid situations such as the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables driving corporate defaults vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies have used static models, and most of them do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for this time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model using the data before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007-2008). As a result, we construct a model that shows a pattern similar to the training results and excellent prediction power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008), applying the optimal parameters found in the validation step. Finally, the corporate default prediction models trained over these nine years are evaluated and compared using the test data (2009), demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), it is shown that the deep learning time series model is useful for robust corporate default prediction across the three resulting variable sets. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and Lasso regression are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. Corporate data present limitations of nonlinear variables, multicollinearity among variables, and a lack of data: the logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, together with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and, eventually, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and also offers better predictive power. Through the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still limited. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative reference material for non-specialists who begin research combining financial data with deep learning time series algorithms.
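To make the modeling setup concrete, the sketch below shows a minimal LSTM binary classifier over multi-year sequences of financial ratios in Keras. It is a hypothetical illustration of the general approach, not the authors' model; the input shapes, layer sizes, and training settings are assumptions.

```python
import tensorflow as tf

# X: (n_firms, n_years, n_ratios) sequences of annual financial ratios,
# y: (n_firms,) default labels (1 = default). Shapes and settings are illustrative.
def build_lstm_classifier(n_years, n_ratios):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_years, n_ratios)),
        tf.keras.layers.LSTM(32),                          # summarize the firm's history
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),    # default probability
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

# model = build_lstm_classifier(n_years=7, n_ratios=X_train.shape[-1])
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50, batch_size=64)
```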

Modeling of Sensorineural Hearing Loss for the Evaluation of Digital Hearing Aid Algorithms (디지털 보청기 알고리즘 평가를 위한 감음신경성 난청의 모델링)

  • 김동욱;박영철
    • Journal of Biomedical Engineering Research, v.19 no.1, pp.59-68, 1998
  • Digital hearing aids offer many advantages over conventional analog hearing aids, and with the advent of high-speed digital signal processing chips, new digital techniques have been introduced into them. However, the evaluation of new ideas in hearing aids is necessarily accompanied by intensive subject-based clinical tests, which require much time and cost. In this paper, we present an objective method to evaluate and predict the performance of hearing aid systems without such subject-based tests. In the hearing impairment simulation (HIS) algorithm, a sensorineural hearing impairment model is established from auditory test data of the impaired subject being simulated. The nonlinear behavior of loudness recruitment is also defined using hearing loss functions generated from the measurements. To transform natural input sound into its impaired version, a frequency sampling filter is designed; the filter is continuously refreshed with the level-dependent frequency response function provided by the impairment model. To assess its performance, the HIS algorithm was implemented in real time on a floating-point DSP. Signals processed with the real-time system were presented to normal-hearing subjects, and their auditory responses as modified by the system were measured. The sensorineural hearing impairment was simulated and tested, and hearing threshold and speech discrimination tests demonstrated the effectiveness of the system for hearing impairment simulation. Using the HIS system, we evaluated three typical hearing aid algorithms.
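As a simplified illustration of frequency-dependent hearing-loss shaping, the sketch below applies a static attenuation derived from an audiogram in the STFT domain. It is a hypothetical stand-in for the paper's level-dependent frequency sampling filter, which additionally adapts the response to the input level; the function name and parameters are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def simulate_loss(x, fs, audiogram_freqs_hz, audiogram_loss_db):
    """Apply a frequency-dependent attenuation derived from an audiogram.

    x: input signal, fs: sampling rate.
    audiogram_freqs_hz / audiogram_loss_db: audiogram frequencies and loss values.
    This static attenuation is a simplified stand-in for a level-dependent filter.
    """
    f, _, X = stft(x, fs=fs, nperseg=512)
    # interpolate the audiogram onto the STFT frequency grid
    loss_db = np.interp(f, audiogram_freqs_hz, audiogram_loss_db)
    gain = 10.0 ** (-loss_db / 20.0)            # dB loss -> linear attenuation
    _, y = istft(X * gain[:, None], fs=fs, nperseg=512)
    return y
```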


Stress dissipation characteristics of four implant thread designs evaluated by 3D finite element modeling (4종 임플란트 나사산 디자인의 응력분산 특성에 대한 3차원 유한요소해석 연구)

  • Nam, Ok-Hyun;Yu, Won-Jae;Kyung, Hee-Moon
    • The Journal of Korean Academy of Prosthodontics, v.53 no.2, pp.120-127, 2015
  • Purpose: The aim was to investigate the effect of implant thread design on the stress dissipation of the implant. Materials and methods: The threads evaluated in this study were the V-shaped, buttress, reverse buttress, and square-shaped threads, all of the same size (depth). Four implant/bone complexes were built, each consisting of an implant carrying one of the four threads on its cylindrical body ($4.1mm{\times}10mm$), and a force of 100 N was applied to the top of the implant abutment at $30^{\circ}$ to the implant axis. To simulate different osseointegration stages at the implant/bone interface, a nonlinear contact condition was used for the immature osseointegration state and a bonded condition for the mature osseointegration state. Results: The stress distribution pattern around the implant differed depending on the osseointegration state. Stress levels, as well as the differences in stress between the analysis models (with different threads), were higher in the immature osseointegration state; both the stress levels and the differences between models became lower in the completely osseointegrated state. The stress dissipation characteristics of the V-shaped thread were intermediate among the four threads in both the immature and mature states of osseointegration. These results indicate that implant thread design may have a biomechanical impact on the implant bed bone until the osseointegration process is finished. Conclusion: The stress dissipation characteristics of the V-shaped thread were intermediate among the four threads in both the immature and mature states of osseointegration.

Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In;PARK, Choong-Shik
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.53-65, 2019
  • Object tracking is one of the important steps in building video-based surveillance systems and is considered an essential task alongside object detection and recognition. To perform object tracking, various machine learning methods (e.g., least squares, the perceptron, and the support vector machine) can be applied in different tracking system designs. Generative methods (e.g., principal component analysis) have traditionally been utilized for their simplicity and effectiveness, but they focus only on modeling the target object. Because of this limitation, discriminative methods (e.g., binary classification) were adopted to distinguish the target object from the background. Among the machine learning methods for binary classification, total error rate minimization is one of the successful approaches: it can reach a global minimum because it uses a quadratic approximation to the step function, whereas other methods (e.g., the support vector machine) seek local minima using nonlinear losses (e.g., the hinge loss). This quadratic approximation gives total error rate minimization favorable properties for solving binary classification optimization problems. However, total error rate minimization was originally formulated in a batch-mode setting, which is restricted to offline learning and, given limited computing resources, cannot handle large-scale data sets. Compared to offline learning, online learning can update its solution without storing all training samples, and with the growth of large-scale data sets it has become essential for many applications. Since object tracking must handle data samples in real time, online learning based total error rate minimization methods are needed to address tracking problems efficiently. An online learning based total error rate minimization method was previously developed to meet this need, but it relied on an approximately reweighted technique. Although this online version of total error rate minimization achieved good performance in biometric applications, it assumes that total error rate minimization is reached only asymptotically, as the number of training samples goes to infinity. Under this assumption, the approximation can continuously accumulate learning errors as training samples increase, so the approximated online solution can drift toward a wrong solution, which can cause significant errors when applied to surveillance systems. In this paper, we propose an exactly reweighted technique that recursively updates the solution of total error rate minimization in an online learning manner. In contrast to approximately reweighted online total error rate minimization, exact reweighting is achieved. The proposed exact online learning method based on total error rate minimization is then applied to object tracking problems. Our object tracking system adopts particle filtering, and its observation model combines generative and discriminative methods to leverage the advantages of both. In our experiments, the proposed object tracking system achieves promising performance on 8 public video sequences compared with competing object tracking systems, and a paired t-test is reported to assess the quality of the results. The proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks, and other online learning methods that require an exact reweighting process can use the proposed reweighting technique. Beyond object tracking, the proposed online learning method can also be applied to object detection and recognition, so the proposed methods can contribute to the online learning community as well as to the object tracking, detection, and recognition communities.
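Since the tracker adopts particle filtering, the skeleton below shows one bootstrap particle-filter step for 2-D tracking, with the observation model abstracted as a user-supplied likelihood function (in the paper that role is played by the combined generative/discriminative scorer, which is not reproduced here). The random-walk motion model, names, and parameters are assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, likelihood, motion_std=5.0, rng=None):
    """One bootstrap particle-filter step for 2-D object tracking.

    particles: (N, 2) candidate object centers, weights: (N,) normalized weights.
    likelihood(p) scores a candidate state; any observation model can be plugged in.
    """
    rng = rng or np.random.default_rng()
    n = len(particles)
    # resample according to the current weights
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # propagate with a random-walk motion model
    particles = particles + rng.normal(scale=motion_std, size=particles.shape)
    # reweight with the observation model and normalize
    weights = np.array([likelihood(p) for p in particles])
    weights = weights / weights.sum()
    estimate = (weights[:, None] * particles).sum(axis=0)   # weighted mean state
    return particles, weights, estimate
```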

The Impact of Self-efficacy on Job Engagement and Job Performance of SMEs' Members: SEM-ANN Analysis (중소기업 조직구성원의 자기효능감이 직무열의와 직무성과에 미치는 영향: 구조모형분석-인공신경망 분석의 적용)

  • Kang, Tae-Won;Lee, Yong-Ki;Lee, Yong-Suk
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship, v.13 no.6, pp.155-166, 2018
  • The purpose of this study is to analyze the impact of the self-efficacy of SMEs' organization members on job engagement and job performance, and to analyze differences by gender and marital status by applying SEM-ANN analysis. To accomplish this, 285 valid samples were collected from 400 SME organization members and analyzed. In this study, self-efficacy consisted of three sub-dimensions: self-confidence, self-regulation efficacy, and task difficulty preference. The analysis showed that all three sub-dimensions (self-confidence, self-regulation efficacy, and task difficulty preference) had a positive direct effect on job engagement. Self-confidence and self-regulation efficacy also had a positive effect on job performance, whereas task difficulty preference had no significant effect. In addition, job engagement has a positive (+) effect on job performance and mediates the relationship between self-efficacy and job performance. Married males showed a preference for self-regulation efficacy, while females showed this preference regardless of marital status. This study presents a self-efficacy-job engagement-job performance framework for SMEs, extending self-efficacy research that has mainly been conducted in the education and service industries, and is meaningful in that it can help companies manage organization members according to their gender and marital status. In addition, the SEM-ANN analysis used in this study is distinctive in that it captures nonlinear (non-compensatory) relationships among variables in addition to the linear (compensatory) relationships analyzed with SEM.

Predicting Forest Gross Primary Production Using Machine Learning Algorithms (머신러닝 기법의 산림 총일차생산성 예측 모델 비교)

  • Lee, Bora;Jang, Keunchang;Kim, Eunsook;Kang, Minseok;Chun, Jung-Hwa;Lim, Jong-Hwan
    • Korean Journal of Agricultural and Forest Meteorology, v.21 no.1, pp.29-41, 2019
  • Terrestrial gross primary production (GPP) is the largest global carbon flux, and forest ecosystems are important because of their ability to store much larger amounts of carbon than other terrestrial ecosystems. There have been several attempts to estimate GPP using mechanism-based models. However, mechanism-based models, which incorporate biological, chemical, and physical processes, are limited by a lack of flexibility in predicting the non-stationary ecological processes caused by local and global change. Mechanism-free methods are instead strongly recommended for estimating nonlinear dynamics that occur in nature, such as GPP. Therefore, we used mechanism-free machine learning techniques to estimate daily GPP. In this study, a support vector machine (SVM), random forest (RF), and artificial neural network (ANN) were used and compared with a traditional multiple linear regression model (LM). MODIS products and meteorological parameters from eddy covariance data were employed to train the machine learning and LM models on data from 2006 to 2013. The GPP prediction models were compared with daily GPP from eddy covariance measurements in a deciduous forest in South Korea in 2014 and 2015. Statistical measures including the correlation coefficient (R), root mean square error (RMSE), and mean squared error (MSE) were used to evaluate the performance of the models. In general, the machine learning models (R = 0.85 - 0.93, MSE = 1.00 - 2.05, p < 0.001) showed better performance than the linear regression model (R = 0.82 - 0.92, MSE = 1.24 - 2.45, p < 0.001). These results indicate the high predictability of mechanism-free machine learning models combined with remote sensing, and their potential for wider use in predicting non-stationary ecological processes such as seasonal GPP.
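A minimal version of the model comparison described above could look like the sketch below, which trains a random forest and a multiple linear regression on the same predictors and reports R and MSE on held-out years. It is a hypothetical outline, not the authors' code; the predictor matrix (MODIS products plus flux-tower meteorology) and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def compare_models(X_train, y_train, X_test, y_test):
    """Train RF and multiple linear regression on the training years and
    evaluate daily GPP predictions against held-out eddy-covariance GPP."""
    scores = {}
    for name, model in [("RF", RandomForestRegressor(n_estimators=500, random_state=0)),
                        ("LM", LinearRegression())]:
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        r = np.corrcoef(y_test, pred)[0, 1]                 # correlation coefficient
        scores[name] = {"R": r, "MSE": mean_squared_error(y_test, pred)}
    return scores
```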