• Title/Summary/Keyword: Tuning time


Comparison of classical and reliable controller performances for seismic response mitigation

  • Kavyashree, B.G.;Patil, Shantharama;Rao, Vidya S.
    • Earthquakes and Structures
    • /
    • v.20 no.3
    • /
    • pp.353-364
    • /
    • 2021
  • Natural hazards such as earthquakes, high winds, and tsunamis are a constant threat to multi-story structures. The environmental forces themselves cannot be stopped, but structures can be protected from these hazards by protective systems. Structural control can be achieved with passive, active, semi-active, and hybrid protective systems; the semi-active type has gained importance because it combines the adaptability of active systems with the reliability of passive systems. Therefore, a semi-active protective system against earthquake forces is adopted in this work. A Magneto-Rheological (MR) damper is installed in the structure as the semi-active device and is connected to a current driver and the proposed controller. Two controllers are proposed, a Proportional Integral Derivative (PID) controller and a reliable PID controller, which actuate the MR damper to generate the force required to mitigate the vibration of the structural response under earthquake excitation. Both controllers are designed and tuned using the Ziegler-Nichols technique and, together with the MR damper, are simulated in MATLAB/Simulink to reduce the vibration of a three-story benchmark structure. Because earthquakes are inherently uncertain, this paper also considers robustness so that the proposed control algorithm provides satisfactory resilience against this uncertainty. Two earthquakes, El-Centro and Northridge, are used in simulations with the different controllers, and the performances of the structure with and without the two controllers are compared and discussed.
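
The Ziegler-Nichols tuning step mentioned in the abstract can be sketched as follows. This is a minimal illustration of the classic closed-loop tuning rule, not the paper's Simulink implementation; the ultimate gain and period values are hypothetical.

```python
def ziegler_nichols_pid(Ku, Tu):
    """Classic Ziegler-Nichols closed-loop tuning for a PID controller.

    Ku: ultimate gain at which the closed loop sustains oscillation.
    Tu: oscillation period at Ku (seconds).
    Returns (Kp, Ki, Kd) for the parallel form
    u(t) = Kp*e + Ki*integral(e) + Kd*de/dt.
    """
    Kp = 0.6 * Ku
    Ti = 0.5 * Tu        # integral time
    Td = 0.125 * Tu      # derivative time
    return Kp, Kp / Ti, Kp * Td

# Hypothetical plant test: sustained oscillation at Ku = 10 with Tu = 2 s
Kp, Ki, Kd = ziegler_nichols_pid(10.0, 2.0)
# Kp = 6.0, Ki = 6.0, Kd = 1.5
```

In practice the gains obtained this way are only a starting point and are usually refined in simulation, as the paper does for the MR damper loop.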

Modeling of Boiler Steam System in a Thermal Power Plant Based on Generalized Regression Neural Network (GRNN 알고리즘을 이용한 화력발전소 보일러 증기계통의 모델링에 관한 연구)

  • Lee, Soon-Young;Lee, Jung-Hoon
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.349-354
    • /
    • 2022
  • In thermal power plants, boiler models are widely used for evaluating logic configurations, performing system tuning, applying control theory, and so on. Moreover, accurate plant models are needed to design accurate controllers. Mathematical models sometimes cannot describe a power plant exactly because of the time-varying behavior, nonlinearity, uncertainty, and complexity of thermal power plants. In such cases, a neural network can be a useful way to estimate the system. In this paper, models of the boiler steam system in a thermal power plant are developed using a generalized regression neural network (GRNN). Models of the superheater, reheater, attemperator, and drum are designed with GRNN and are trained and validated on real data obtained from a 540 MW power plant. The validation results show that the proposed models agree well with the actual outputs of the drum boiler.
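
At its core, a GRNN prediction is a Gaussian-kernel weighted average of the training targets (Nadaraya-Watson regression). A minimal one-dimensional sketch, with toy data standing in for the plant measurements:

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """GRNN prediction: Gaussian-kernel weighted average of training
    targets. sigma is the smoothing parameter (spread of the kernel)."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    total = sum(weights)
    return sum(w * yi for w, yi in zip(weights, train_y)) / total

# Toy data (not plant data): learn y = x^2 from four samples
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]
yhat = grnn_predict(1.0, xs, ys, sigma=0.3)  # close to 1.0
```

The real models are multivariate and trained on plant data, but the prediction rule is the same weighted average; only sigma needs tuning.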

Research on Data Tuning Methods to Improve the Anomaly Detection Performance of Industrial Control Systems (산업제어시스템의 이상 탐지 성능 개선을 위한 데이터 보정 방안 연구)

  • JUN, SANGSO;Lee, Kyung-ho
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.4
    • /
    • pp.691-708
    • /
    • 2022
  • As machine learning and deep learning technology became common, they began to be applied to research on anomaly (abnormal) detection in industrial control systems. In Korea, the HAI dataset was developed and published to stimulate artificial intelligence research on anomaly detection in industrial control systems, and an AI contest for detecting industrial control system security threats is being held. Most anomaly detection studies have aimed to build a better-performing learning model through ensemble methods, either by modifying an existing deep learning algorithm or by combining it with other algorithms. In this study, anomaly detection performance is improved not through the learning algorithm or the data pre-processing process, but through a post-processing method that detects abnormal data and corrects the labeling results. The results improved by about 10% or more compared to the anomaly detection performance of the existing model.
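
The paper corrects labeling results after detection rather than changing the model. One common post-processing correction of this kind, shown here as a hypothetical sketch rather than the authors' exact method, closes short normal-labeled gaps sandwiched between detected anomaly runs:

```python
def smooth_labels(labels, max_gap=2):
    """Post-processing sketch: flip short runs of 0 (normal) that are
    bounded on both sides by 1 (anomaly), closing gaps of at most
    max_gap samples. Returns a corrected copy of the label sequence."""
    out = list(labels)
    n = len(out)
    i = 0
    while i < n:
        if out[i] == 0:
            j = i
            while j < n and out[j] == 0:
                j += 1
            # fill the gap only if anomalies bound it on both sides
            if 0 < i and j < n and (j - i) <= max_gap:
                for k in range(i, j):
                    out[k] = 1
            i = j
        else:
            i += 1
    return out

corrected = smooth_labels([1, 0, 0, 1, 0, 0, 0, 1])
# → [1, 1, 1, 1, 0, 0, 0, 1]: the 2-sample gap is closed, the 3-sample gap kept
```

Heuristics like this exploit the fact that industrial-control attacks tend to span contiguous time windows, so isolated label flips are likely detection noise.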

Boosting WiscKey Key-Value Store Using NVDIMM-N (NVDIMM-N을 활용한 WiscKey 키-밸류 스토어 성능 향상)

  • Il Han Song;Bo hyun Lee;Sang Won Lee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.3
    • /
    • pp.111-116
    • /
    • 2023
  • The WiscKey database reduces the compaction overhead of LSM tree-based key-value databases by storing values in a separate file and keeping only keys and value addresses in the database. Each time a value is stored, the fsync system call is used to guarantee data integrity. Prior studies found a performance difference of up to 5.8 times when the workload was run without calling fsync; however, it is difficult to guarantee the data integrity of the database without fsync. In this paper, to reduce the overhead of fsync while running workloads on the WiscKey database, we use an NVDIMM caching technique that preserves data integrity while improving WiscKey's performance.
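
The integrity guarantee at issue is the per-write fsync. A minimal sketch of a durable append illustrates the call whose latency the NVDIMM cache is meant to absorb; the file path and record format here are illustrative, not WiscKey's actual on-disk layout:

```python
import os
import tempfile

def durable_append(path, data: bytes):
    """Append data and force it to stable storage. os.fsync blocks
    until the device acknowledges the write, which is the overhead
    the paper targets by caching in NVDIMM instead."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # durability barrier: survives a crash after return
    finally:
        os.close(fd)

path = tempfile.mktemp()
durable_append(path, b"key1:value1\n")
with open(path, "rb") as f:
    assert f.read() == b"key1:value1\n"
os.remove(path)
```

With battery-backed NVDIMM, a write acknowledged into the DIMM is already persistent, so the synchronous device round-trip can be deferred without giving up the integrity guarantee.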

Target-free vision-based approach for vibration measurement and damage identification of truss bridges

  • Dong Tan;Zhenghao Ding;Jun Li;Hong Hao
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.421-436
    • /
    • 2023
  • This paper presents a vibration displacement measurement and damage identification method for a space truss structure based on its vibration videos. The Features from Accelerated Segment Test (FAST) algorithm is combined with an adaptive threshold strategy to detect high-quality feature points within a Region of Interest (ROI) around each node of the truss structure. These points are then tracked with the Kanade-Lucas-Tomasi (KLT) algorithm along the video frame sequence to obtain vibration displacement time histories. For cases where the image plane is not parallel to the truss structural plane, scale factors cannot be applied directly, so those videos are processed with a homography transformation. After scale factor adaptation, the tracking results are expressed in physical units and compared with ground truth data. The main operational frequencies and the corresponding mode shapes are identified from the obtained vibration displacement responses using Stochastic Subspace Identification (SSI) and compared with ground truth data. Structural damage is quantified as elemental stiffness reductions. A Bayesian inference-based objective function built on the natural frequencies is constructed to identify damage by model updating. Success-History based Adaptive Differential Evolution with Linear Population Size Reduction (L-SHADE) is applied to minimize this objective function by tuning the damage parameter of each element, yielding the locations and severities of damage in each case. The accuracy and effectiveness are verified by comparing the identified results with the ground truth data.
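
The homography correction applied to non-parallel views maps each tracked pixel through a 3x3 projective matrix. A dependency-free sketch of the mapping (the matrix values here are illustrative, not calibrated from any bridge footage):

```python
def apply_homography(H, pt):
    """Map a pixel coordinate through a 3x3 homography H (row-major
    nested lists). Projective division by w rectifies points from an
    image plane that is not parallel to the structural plane."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Identity homography leaves points unchanged; a pure scaling doubles them
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
S = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]
```

In the pipeline, H would be estimated from correspondences between the image and the known truss geometry (e.g., with OpenCV's findHomography), after which pixel displacements can be converted to physical units by a single scale factor.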

Study on the Application of Artificial Intelligence Model for CT Quality Control (CT 정도관리를 위한 인공지능 모델 적용에 관한 연구)

  • Ho Seong Hwang;Dong Hyun Kim;Ho Chul Kim
    • Journal of Biomedical Engineering Research
    • /
    • v.44 no.3
    • /
    • pp.182-189
    • /
    • 2023
  • CT is a medical device that acquires medical images based on the X-ray attenuation coefficients of human organs; from these data it can also reconstruct sagittal and coronal planes and 3D images of the body, making CT essential for routine diagnostic examinations. However, the radiation exposure of a CT scan is high enough that CT is regulated and managed as special medical equipment, and as such it must undergo quality control. Within that quality control, the spatial resolution and contrast resolution of the existing phantom imaging tests and the clinical image evaluation are qualitative tests; because they are not objective, they undermine trust in the reliability of the CT system. We therefore applied artificial intelligence classification models to confirm whether the qualitative parts of the phantom test can be evaluated quantitatively. We used the classification models VGG19, DenseNet201, EfficientNet B2, Inception-ResNet-v2, ResNet50V2, and Xception, with an additional fine-tuning step during training. As a result, across all classification models, the spatial resolution accuracy was 0.9562 or higher, precision was 0.9535, recall was 1, the loss value was 0.1774, and training time ranged from a maximum of 14 minutes to a minimum of 8 minutes 10 seconds. The experimental results lead to the conclusion that artificial intelligence models can be applied to CT quality control for spatial resolution and contrast resolution.
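
The precision and recall figures quoted above follow the standard definitions. A small sketch of their computation, using toy labels rather than the study's phantom data:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Standard binary-classification metrics:
    precision = TP / (TP + FP), recall = TP / (TP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy example: 2 true positives in 4 samples, 1 missed, 1 false alarm
prec, rec = precision_recall([1, 1, 0, 0], [1, 0, 1, 0])
# → (0.5, 0.5)
```

A recall of 1, as reported for the spatial-resolution task, means the models produced no false negatives: every failing phantom image was flagged.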

Infrastructure Anomaly Analysis for Data-center Failure Prevention: Based on RRCF and Prophet Ensemble Analysis (데이터센터 장애 예방을 위한 인프라 이상징후 분석: RRCF와 Prophet Ensemble 분석 기반)

  • Hyun-Jong Kim;Sung-Keun Kim;Byoung-Whan Chun;Kyong-Bog, Jin;Seung-Jeong Yang
    • The Journal of Bigdata
    • /
    • v.7 no.1
    • /
    • pp.113-124
    • /
    • 2022
  • Various methods using machine learning and big data have been applied to prevent failures in data centers. However, approaches that reference the performance indicators of individual pieces of equipment, or that ignore the infrastructure operating environment, have many practical limitations. In this study, the performance indicators of individual infrastructure equipment are monitored in an integrated way, and the indicators of the various equipment are segmented and graded into a single numerical value, with data pre-processing informed by experience in infrastructure operation. An ensemble of RRCF (Robust Random Cut Forest) analysis and the Prophet analysis model yielded reliable results in detecting anomalies. A failure analysis system was implemented for ease of use by data center operators. It can provide a preemptive response to data center failures and an appropriate tuning time.
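
One way to ensemble two anomaly scorers such as RRCF displacement scores and Prophet forecast residuals is to normalize each stream and average them. The abstract does not specify the exact combination rule, so the following is a hypothetical sketch:

```python
def ensemble_anomaly(scores_a, scores_b, threshold=0.5):
    """Hypothetical ensemble of two anomaly-score streams (e.g., RRCF
    scores and Prophet forecast residuals): min-max normalize each
    stream to [0, 1], average pointwise, then threshold into 0/1 flags."""
    def minmax(s):
        lo, hi = min(s), max(s)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in s]
    a, b = minmax(scores_a), minmax(scores_b)
    combined = [(x + y) / 2 for x, y in zip(a, b)]
    return [1 if c >= threshold else 0 for c in combined]

# Both detectors spike on the last sample → only it is flagged
flags = ensemble_anomaly([0.0, 1.0, 10.0], [0.0, 2.0, 20.0])
# → [0, 0, 1]
```

Averaging normalized scores suppresses false alarms that only one detector raises, which is a common reason ensembles of complementary detectors (tree-based and forecast-based) are more reliable than either alone.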

Measurement strategy of a system parameters for the PI current control of the A.C. motor (교류 전동기의 PI 전류제어를 위한 시스템 파라미터 계측법)

  • Jung-Keyng Choi
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.5
    • /
    • pp.223-229
    • /
    • 2023
  • This paper proposes a method for measuring the main system parameters needed for PI (proportional-integral) current control of an a.c. motor under vector control. The PI current-control gains can be tuned by several methods; among them, methods that use the main system parameters, the winding resistance and inductance, are frequently used. This study presents a technique that analytically extracts these two parameters from the results of a simple feedback control experiment: the parameters are measured step by step by dissecting the response of P control with a simple proportional feedback gain to unit-step or multi-step reference commands. The strategy is a real-time analytic measurement method that calculates the current-control gains of both the torque and flux components for vector control of an a.c. motor without introducing additional measurement circuits or complex measuring algorithms.
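
Under a simple first-order R-L model of the winding, i(s)/u(s) = 1/(R + Ls), the steady state and time constant of a P-control step response determine both parameters. The following back-calculation is a hypothetical simplification of the paper's step-by-step procedure, not its exact algorithm:

```python
def estimate_rl(Kp, ref, i_ss, tau):
    """Back-calculate winding resistance R and inductance L from a
    proportional-control step test on the R-L plant i/u = 1/(R + Ls).
    Closed loop: i_ss = Kp*ref / (R + Kp), time constant tau = L / (R + Kp).
    Kp: proportional gain; ref: step reference; i_ss: measured steady-state
    current; tau: measured closed-loop time constant."""
    R = Kp * (ref - i_ss) / i_ss   # from the steady-state equation
    L = tau * (R + Kp)             # from the time-constant equation
    return R, L

# Synthetic check: true R = 2 ohm, L = 0.06 H, Kp = 10, unit step ->
# i_ss = 10/12 A, tau = 0.06/12 s; the estimates recover R and L
R, L = estimate_rl(10.0, 1.0, 10.0 / 12.0, 0.005)
```

Once R and L are known, the PI gains for both the torque- and flux-axis current loops can be set analytically (e.g., pole-zero cancellation with Kp = L*omega_c, Ki = R*omega_c for a chosen bandwidth omega_c), which is the use the paper makes of the measured parameters.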

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults ripple through the local and national economy, harming not only stakeholders such as the managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models; as a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even afterwards, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated only on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid a sudden total collapse like the Lehman Brothers case of the global financial crisis. The key variables behind corporate defaults vary over time: Deakin's (1972) study shows that the major factors affecting corporate failure changed relative to the analyses of Beaver (1967, 1968) and Altman (1968), and Grice (2001) likewise found shifts in the importance of the predictive variables used in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider the changes that occur over time. Therefore, to construct consistent prediction models, the time-dependent bias must be compensated by a time series analysis algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided into training, validation, and test data covering 7, 2, and 1 years respectively.
To construct a bankruptcy model that stays consistent as conditions change over time, we first train the time series deep learning model on data from before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007-2008); the resulting model shows a pattern similar to the training results and excellent predictive power. Each bankruptcy prediction model is then rebuilt on the combined training and validation data (2000-2008), applying the optimal parameters found during validation. Finally, the corporate default prediction models trained over those nine years are evaluated and compared on the test data (2009), demonstrating the usefulness of the deep learning time series approach. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three variable bundles. The definition of bankruptcy used is the same as in Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups, and the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. Corporate data suffer from three limitations: nonlinear variables, multi-collinearity among variables, and a lack of data.
The logit model handles nonlinearity, the Lasso regression model solves the multi-collinearity problem, and the deep learning time series algorithm, with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, eventually, intertwined AI applications. Although research on corporate default prediction models using time series algorithms is still at an early stage, the deep learning algorithm is much faster than regression analysis for default prediction modeling and delivers better predictive power. Through the Fourth Industrial Revolution, the Korean and other governments are working to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. As an initial study on deep learning time series analysis of corporate defaults, this work is intended as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.
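
The chronological split the study describes (7 training years, 2 crisis validation years, 1 test year) can be sketched directly; the record layout here, a (year, features, label) tuple, is an assumption for illustration:

```python
def split_by_year(records):
    """Chronological split as described in the study: train on
    2000-2006, validate across the financial-crisis years 2007-2008,
    test on 2009. Each record is assumed to be a (year, features,
    label) tuple; only the year field is inspected."""
    train = [r for r in records if 2000 <= r[0] <= 2006]
    valid = [r for r in records if 2007 <= r[0] <= 2008]
    test  = [r for r in records if r[0] == 2009]
    return train, valid, test

# One toy record per year, 2000..2009 → splits of size 7, 2, and 1
records = [(year, [], 0) for year in range(2000, 2010)]
train, valid, test = split_by_year(records)
```

Validating on the crisis years specifically is what lets the tuned model demonstrate robustness to a regime shift before being retrained on 2000-2008 and tested on 2009.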

Optimal Transmission Scheduling for All-to-all Broadcast in WDM Optical Passive Star Networks (수동적인 스타형 파장 분할 다중 방식인 광 네트워크에서의 전방송을 위한 최적 전송 스케쥴링)

  • Jang, Jong-Jun;Park, Young-Ho;Hong, Man-Pyo;Wee, Kyu-Bum;Yeh, Hong-Jin
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.27 no.1
    • /
    • pp.44-52
    • /
    • 2000
  • This paper addresses the packet transmission scheduling problem for repeating all-to-all broadcasts in WDM optical passive-star networks with N nodes and k wavelengths. It is assumed that each node has one tunable transmitter and one fixed-tuned receiver, and that each transmitter can tune to the k different wavelengths. The tuning delay, the time a transmitter takes to tune from one wavelength to another, is denoted δ (> 0) in units of packet durations. All-to-all broadcast is defined as every node transmitting a packet to every other node except itself, so N(N-1) packets in total must be transmitted per broadcast. Optimal transmission scheduling transmits all of these packets within the minimum time. In this paper, we propose a condition for optimal transmission schedules and present an optimal transmission scheduling algorithm for arbitrary values of N, k, and δ. The cycle length of the optimal schedules is max{⌈N/k⌉(N-1), kδ+N-1}.
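
The stated cycle length can be evaluated numerically. The sketch below assumes the bound max{⌈N/k⌉(N-1), kδ+N-1}, reading the undefined M in the source's formula as N (each wavelength must carry at least ⌈N/k⌉ nodes' worth of N-1 packets, and each transmitter needs kδ of tuning time plus N-1 transmission slots):

```python
import math

def cycle_length(N, k, delta):
    """Schedule-length bound (in packet durations) for repeated
    all-to-all broadcast with N nodes, k wavelengths, tuning delay
    delta: the busiest wavelength carries ceil(N/k)*(N-1) packets,
    and each transmitter spends k*delta tuning plus N-1 transmitting."""
    return max(math.ceil(N / k) * (N - 1), k * delta + N - 1)

# Transmission-limited case: 6 nodes, 2 wavelengths, delta = 1 → 15
# Tuning-limited case: 4 nodes, 4 wavelengths, delta = 3 → 15
```

Which term dominates depends on the regime: many nodes per wavelength makes the schedule transmission-limited, while large δ or many wavelengths makes it tuning-limited.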
