• Title/Summary/Keyword: Weighted Average Model

226 search results

T-S fuzzy PID control based on RCGAs for the automatic steering system of a ship (선박자동조타를 위한 RCGA기반 T-S 퍼지 PID 제어)

  • Yu-Soo LEE;Soon-Kyu HWANG;Jong-Kap AHN
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.59 no.1 / pp.44-54 / 2023
  • In this study, Nomoto's second-order nonlinear expansion model was implemented as a Takagi-Sugeno fuzzy model based on the heading angular velocity in order to design the automatic steering system of a ship while accounting for nonlinear elements. A Takagi-Sugeno fuzzy PID controller was designed using the fuzzy membership functions of the Takagi-Sugeno fuzzy model. The linear models and fuzzy membership functions at each operating point of the given nonlinear expansion model were tuned simultaneously using a real-coded genetic algorithm (RCGA). A zig-zag experiment confirmed that the implemented Takagi-Sugeno fuzzy model accurately describes the given nonlinear expansion model. The optimal parameters of the sub-PID controller for each operating point of the Takagi-Sugeno fuzzy model were searched using a genetic algorithm. The evaluation function for this search considered the route extension due to course deviation and the resistance component induced by steering. By adding a penalty function to the evaluation function, the automatic steering system could be evaluated on its ability to track the set course without overshoot when changing course. It was confirmed that the sub-PID controller for each operating point minimized the evaluation function and followed the set course without overshoot during course changes. The outputs of the tuned sub-PID controllers were combined by a weighted average using the membership functions of the Takagi-Sugeno fuzzy model. The proposed Takagi-Sugeno fuzzy PID controller was applied to Nomoto's second-order nonlinear expansion model, and examination of the transient response to set-course changes confirmed that the set course was tracked satisfactorily.
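
As a rough illustration of the weighted-average combination step described in this abstract, the sketch below blends sub-PID commands by their Takagi-Sugeno membership grades. The triangular membership shapes, operating-point centers, and output values are invented for illustration; they are not the paper's tuned parameters.

```python
import numpy as np

# Hypothetical illustration (not the authors' code): combine sub-PID outputs
# u_i by the normalized Takagi-Sugeno membership grades w_i(r) evaluated at
# the current heading angular velocity r.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def ts_fuzzy_pid_output(r, sub_pid_outputs, centers, width=0.2):
    w = np.array([triangular(r, c - width, c, c + width) for c in centers])
    u = np.asarray(sub_pid_outputs, dtype=float)
    return float(np.dot(w, u) / w.sum())   # u = sum(w_i * u_i) / sum(w_i)

# Three operating points with made-up sub-PID rudder commands:
print(ts_fuzzy_pid_output(r=0.15, sub_pid_outputs=[2.0, 3.5, 5.0],
                          centers=[0.0, 0.2, 0.4]))
```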

Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm (유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습)

  • Kim, Sang Hun;Chung, Byung Hee;Lee, Gun Ho
    • KIPS Transactions on Software and Data Engineering / v.7 no.9 / pp.351-360 / 2018
  • LWR (Locally Weighted Regression) is traditionally a lazy, memory-based learning model: for a given input, the query point, it fits a regression over a short interval in which training samples closer to the query point receive higher weights, and the prediction is obtained from that local fit. We study an incremental ensemble learning approach for LWR. The proposed method sequentially generates LWR models over time and integrates them using a genetic algorithm to obtain the prediction at a specific query point. A weakness of existing LWR approaches is that multiple LWR models can be generated depending on the indicator function and the data samples selected, and the quality of the prediction varies with that choice; however, little research has addressed how to select or combine multiple LWR models. In this study, after generating an initial LWR model from an indicator function and a sample data set, we iterate an evolutionary learning process to obtain a proper indicator function, and we assess the LWR models on other sample data sets to overcome data-set bias. We adopt an eager learning strategy that gradually generates and stores LWR models as data are generated in every interval. To obtain a prediction at a specific point in time, an LWR model is built from newly generated data within a predetermined interval and then combined, using a genetic algorithm, with the existing LWR models for that interval. The proposed method shows better results than selecting among multiple LWR models with a simple average. The results are also compared with predictions from multiple regression analysis on real data such as hourly traffic volume in a specific area and hourly sales at a highway rest area.
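
For reference, the sketch below shows the basic locally weighted regression step the abstract builds on: samples near the query point receive larger kernel weights in a local least-squares fit. It illustrates only this base learner, under assumed Gaussian-kernel weighting and 1-D inputs, not the paper's incremental ensemble or genetic-algorithm combination.

```python
import numpy as np

# Minimal LWR sketch: weighted least squares around a query point.
def lwr_predict(x_train, y_train, x_query, bandwidth=1.0):
    # Kernel weights: samples close to the query point dominate the local fit.
    w = np.exp(-((x_train - x_query) ** 2) / (2.0 * bandwidth ** 2))
    X = np.column_stack([np.ones_like(x_train), x_train])    # design matrix [1, x]
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)   # (X'WX) beta = X'Wy
    return beta[0] + beta[1] * x_query

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)
print(lwr_predict(x, y, x_query=3.0, bandwidth=0.5))   # close to sin(3.0) ~ 0.141
```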

Incorporation of collapse safety margin into direct earthquake loss estimate

  • Xian, Lina;He, Zheng;Ou, Xiaoying
    • Earthquakes and Structures / v.10 no.2 / pp.429-450 / 2016
  • An attempt has been made to incorporate the concept of collapse safety margin into the procedures proposed in the performance-based earthquake engineering (PBEE) framework for direct earthquake loss estimation, in which the collapse probability curve obtained from incremental dynamic analysis (IDA) is characterized mathematically with an S-type fitting model. The regressed collapse probability curve is then used to separate non-collapse cases from collapse cases. With an assumed lognormal probability distribution for the non-collapse damage indexes, the expected direct earthquake loss ratio is calculated as a weighted average over several damage states for the non-collapse cases. The collapse safety margin is shown to be strongly related to the sustained damage endurance of structures, and such endurance exhibits a strong link with the expected direct earthquake loss. The results of a case study on three concrete frames indicate that increasing the cross section does not always produce a wider collapse safety margin and lower direct earthquake loss; properly enhancing the reinforcement in structural components is a more effective way to achieve both. Interestingly, the total expected direct earthquake loss ratio appears insensitive to changes in cross section, while showing a consistent correlation with the collapse safety margin. The results also indicate that, if direct economic loss is a serious concern, it is important to reduce the probability of moderate and even severe damage as well as the probability of structural collapse.
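
The weighted-average loss computation described here can be illustrated with simple, invented numbers (they are not the paper's results): the expected direct loss ratio is the damage-state-probability-weighted average for non-collapse cases plus the collapse contribution.

```python
# Illustrative arithmetic only; all probabilities and loss ratios are made up.
p_collapse = 0.08                          # e.g., from an S-type fit of IDA results
p_damage_state = [0.50, 0.30, 0.15, 0.05]  # slight/moderate/severe/near-collapse, given no collapse
loss_ratio = [0.05, 0.20, 0.50, 0.85]      # loss ratio assigned to each damage state
loss_ratio_collapse = 1.00

non_collapse_loss = sum(p * l for p, l in zip(p_damage_state, loss_ratio))
expected_loss_ratio = (1 - p_collapse) * non_collapse_loss + p_collapse * loss_ratio_collapse
print(round(expected_loss_ratio, 4))       # 0.2663 with these numbers
```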

A Study of Seam Tracking by Arc Sensor Using Current Area Difference Method (전류 면적차를 이용한 아크 센서의 용접선 추적에 관한 연구)

  • 김용재;이세헌;엄기원
    • Journal of Welding and Joining / v.14 no.6 / pp.131-139 / 1996
  • The response of an arc sensor that uses the welding current and/or welding voltage as its output has typically been obtained from analysis and/or experiments on the static characteristics of the arc sensor. To improve the reliability of the arc sensor, however, its dynamic characteristics must also be known. This paper therefore presents a dynamic model of the arc sensor that includes the power source, arc voltage, electrode burnoff rate, and wire feed rate. A numerical simulation of the dynamic model was implemented, computing the welding current with CTWD as the input. The results of computer simulations and $CO_2$ arc welding experiments showed a linear relationship between the distance from the weaving center to the weld line and the current area difference. In addition, a real-time weld seam tracking system interfaced with an industrial welding robot was constructed, and the seam tracking experiment for a weld line with an initial offset error of $5^{\circ}$ gave good results.

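A hypothetical sketch of the current-area-difference signal mentioned in this abstract is given below: the welding current is integrated over the left and right halves of one weaving cycle and the two areas are compared. The waveform, sampling, and numbers are illustrative assumptions, not the authors' model.

```python
import numpy as np

def current_area_difference(current, weave_position):
    """current, weave_position: equally spaced samples over one weaving cycle;
    positive weave_position means the torch is right of the weaving center."""
    right_area = current[weave_position >= 0.0].sum()
    left_area = current[weave_position < 0.0].sum()
    return right_area - left_area

# Synthetic cycle: an offset between weaving center and weld line makes one
# side of the weave run at a larger CTWD, so the two areas differ and the sign
# of the difference tells the tracker which way to correct.
t = np.linspace(0.0, 1.0, 200, endpoint=False)
weave = np.sin(2.0 * np.pi * t)                  # weaving position
current = 250.0 - 15.0 * (weave + 0.3)           # current drops as CTWD grows
print(current_area_difference(current, weave))   # nonzero because of the offset
```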

Shape Optimization of a Rotating Two-Pass Duct with a Guide Vane in the Turning Region (회전하는 냉각유로의 곡관부에 부착된 가이드 베인의 형상 최적설계)

  • Moon, Mi-Ae;Kim, Kwang-Yong
    • The KSFM Journal of Fluid Machinery / v.14 no.1 / pp.66-76 / 2011
  • The heat transfer and pressure loss characteristics of a rotating two-pass channel with a guide vane in the turning region have been studied using three-dimensional Reynolds-averaged Navier-Stokes (RANS) analysis, and the shape of the guide vane has been optimized using a surrogate-modeling optimization technique. For the optimization, the thickness, location, and angle of the guide vane have been selected as design variables. The objective function has been defined as a linear combination of the heat transfer term and the friction loss term with a weighting factor. Latin hypercube sampling has been applied to determine the design points as the design of experiments. A weighted-average surrogate model, PBA, has been used as the surrogate model. The guide vane in the turning region does not influence the heat transfer in the first passage upstream of the turning region, but it largely enhances the heat transfer in the turning region and the second passage. In an example of the optimization, the objective function has been increased by 13.6%.
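
Two ideas from this abstract, the objective defined as a weighted linear combination and the weighted-average surrogate, are sketched below with invented names and numbers; this is not the paper's PBA formulation, only the general pattern.

```python
def objective(heat_transfer_term, friction_loss_term, weighting_factor=0.5):
    # Linear combination of the two competing terms, as described above.
    return heat_transfer_term + weighting_factor * friction_loss_term

def weighted_average_surrogate(predictions, errors):
    """Blend several surrogate predictions, giving more weight to the
    surrogate with the smaller (e.g., cross-validation) error."""
    inverse_errors = [1.0 / e for e in errors]
    total = sum(inverse_errors)
    return sum(w / total * p for w, p in zip(inverse_errors, predictions))

# Example: three surrogates predicting the objective at one design point.
print(weighted_average_surrogate([1.12, 1.08, 1.15], errors=[0.04, 0.02, 0.05]))
```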

A PNN approach for combining multiple forecasts (예측치 결합을 위한 PNN 접근방법)

  • Jun, Duk-Bin;Shin, Hyo-Duk;Lee, Jung-Jin
    • Journal of Korean Institute of Industrial Engineers / v.26 no.3 / pp.193-199 / 2000
  • In many studies, considerable attention has been focused on choosing a model that represents the underlying process of a time series and forecasting the future. In the real world, however, a single model may not reflect all the characteristics of the original time series, and better performance can often be obtained by combining the forecasts from several models. The most popular combining methods take a weighted average of the multiple forecasts, but the weights are usually unstable. When the assumptions of normality and unbiasedness of the forecast errors are satisfied, a Bayesian method can be used to update the weights; in practice, however, there are many circumstances in which the Bayesian method is not appropriate. This paper proposes a PNN (Probabilistic Neural Net) approach for combining forecasts that can be applied when the assumption of normality or unbiasedness of the forecast errors is not satisfied. The PNN method, which is similar to the Bayesian approach and has usually been used in pattern recognition, is suggested as a way to update the unstable weights in the combination of forecasts. Unlike the Bayesian approach, it requires no assumption of a specific prior distribution because it obtains probabilities from the distribution estimated from the given data. Empirical results reveal that the PNN method offers superior predictive capability.

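A loose sketch of the combination idea is given below: model forecasts are blended with weights taken from Parzen-window (PNN-style) density estimates of how often each model was the better one under similar past conditions, without assuming normal or unbiased errors. The feature, histories, and numbers are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def parzen_density(x, samples, sigma=1.0):
    """Average of Gaussian kernels centered at past feature values."""
    if len(samples) == 0:
        return 1e-12
    samples = np.asarray(samples, dtype=float)
    return float(np.mean(np.exp(-((x - samples) ** 2) / (2.0 * sigma ** 2))))

def combine_forecasts(forecasts, feature, best_history):
    """best_history[i]: feature values of past periods in which model i won."""
    density = np.array([parzen_density(feature, h) for h in best_history])
    weights = density / density.sum()
    return float(np.dot(weights, forecasts)), weights

combined, w = combine_forecasts(forecasts=[102.0, 98.0], feature=5.0,
                                best_history=[[4.8, 5.2, 5.1], [1.0, 1.5, 9.0]])
print(combined, w)   # the first model dominates near feature = 5.0
```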

Development of Simplified Immersed Boundary Method for Analysis of Movable Structures (가동물체형 구조물 해석을 위한 Simplified Immersed Boundary법의 개발)

  • Lee, Kwang-Ho;Kim, Do-Sam
    • Journal of Korean Society of Coastal and Ocean Engineers / v.33 no.3 / pp.93-100 / 2021
  • Since the development of the IB (Immersed Boundary) method, which allows coupled analysis of fluids and objects with impermeable boundaries of arbitrary shape on a fixed grid system, its use in various CFD models has been increasing. Representative IB methods are the direct-forcing method and the ghost-cell method. The direct-forcing method satisfies the boundary condition numerically from the fluid force calculated at the boundary surface of the structure, while the ghost-cell method satisfies the boundary condition through interpolation by placing virtual cells inside the obstacle. These IB methods have the disadvantage of a complex computational algorithm. In this study, a simplified immersed boundary (SIB) method that enables the analysis of movable structures on a fixed grid system and is easy to extend to three dimensions is proposed. The proposed SIB method is based on a one-field model for immiscible two-phase flow that assumes the density function of each phase moves with the local center of mass. In addition, a volume-weighted average using the density function of the solid is applied to handle moving solid structures, and the CIP method is applied to the advection calculation to prevent numerical diffusion. To examine the performance of the proposed SIB method, a numerical simulation was performed for an object falling onto a free water surface, and the numerical results reproduced the falling object well.
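
The volume-weighted averaging mentioned above can be illustrated as follows; the phases, fractions, and densities are invented for the example and do not come from the paper's solver.

```python
import numpy as np

def volume_weighted(fractions, phase_values):
    """fractions: per-phase volume fractions in a grid cell (summing to 1).
    phase_values: the per-phase property (e.g., density) to blend."""
    fractions = np.asarray(fractions, dtype=float)
    phase_values = np.asarray(phase_values, dtype=float)
    return float(np.dot(fractions, phase_values))

# One cell that is 40% water, 10% air, and 50% solid (made-up numbers):
rho_cell = volume_weighted([0.4, 0.1, 0.5], [1000.0, 1.2, 2500.0])
print(rho_cell)   # 1650.12 -- the mixture density used on the fixed grid
```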

High Noise Density Median Filter Method for Denoising Cancer Images Using Image Processing Techniques

  • Priyadharsini.M, Suriya;Sathiaseelan, J.G.R
    • International Journal of Computer Science & Network Security / v.22 no.11 / pp.308-318 / 2022
  • Noise is a serious issue when sending images via electronic communication. Impulse noise, created by unsteady voltage, is one of the most common noises in digital communication and is introduced while images are acquired. Removing this noise without affecting edges and fine details makes accurate diagnostic images possible. This paper proposes a new average High Noise Density Median Filter (HNDMF) that operates in two stages for each pixel and decides whether the test pixel is degraded by salt-and-pepper noise (SPN): in the first stage, a detector identifies corrupted pixels; in the second stage, each corrupted pixel is replaced by a noise-free processed value produced by the new average filter for its window. The comparison of known image denoising methods is discussed, and a new decision-based weighted median filter is used to remove impulse noise. Using the Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Method (SSIM) metrics, the paper examines the performance of the Gaussian Filter (GF), Adaptive Median Filter (AMF), and PHDNF. A detailed simulation on the Mini-MIAS dataset is performed to verify the benefit of the presented model, and the experimental values show that the HNDMF model reaches better performance with the best picture quality. Results are reported for images affected by various amounts of salt-and-pepper noise as well as speckle noise, and according to the quality metrics the HNDMF method produces superior results compared with the existing filter methods by accurately detecting salt-and-pepper noise pixels and replacing them with mean and median values. The proposed method thus improves the median filter significantly.
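
A minimal decision-based median filter in the spirit of the two-stage idea above (detect corrupted pixels, then replace only those) is sketched below; it is a generic illustration under the usual 0/255 salt-and-pepper assumption, not the HNDMF algorithm itself.

```python
import numpy as np

def decision_based_median(img, low=0, high=255, window=3):
    """img: 2-D uint8 array. Pixels at the extreme values are treated as
    salt-and-pepper noise and replaced by the median of their neighborhood."""
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    noisy_rows, noisy_cols = np.where((img == low) | (img == high))  # stage 1: detection
    for r, c in zip(noisy_rows, noisy_cols):                         # stage 2: replacement
        patch = padded[r:r + window, c:c + window]
        clean = patch[(patch != low) & (patch != high)]
        out[r, c] = np.median(clean) if clean.size else np.median(patch)
    return out
```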

Development on Repair and Reinforcement Cost Model for Bridge Life-Cycle Maintenance Cost Analysis (교량 유지관리비용 분석을 위한 대표 보수보강 비용모델 개발)

  • Sun, Jong-Wan;Lee, Dong-Yeol;Park, Kyung-Hoon
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.11 / pp.128-134 / 2016
  • Estimating the repair and reinforcement (R&R) cost for each bridge member is essential for managing the life cycle of a bridge with a bridge management system (BMS). In this study, representative bridge members were defined and detailed and representative R&R methods for each were derived in order to develop a systematic maintenance cost model applicable to the BMS. The unit cost for each detailed R&R method was established using standard estimates and historical cost data, and a systematic procedure with an integration program is presented to enable easy renewal of the R&R unit costs. The average unit cost of each representative R&R method was calculated as a weighted average of the unit costs of the detailed R&R methods, weighted by their application frequencies. The appropriateness of the derived average unit cost was reviewed by comparing it against previous historical unit costs. The suggested average R&R unit cost can be used to review the validity of the required budget or the appropriateness of the R&R performance cost at the stage of establishing the bridge maintenance plan. The results of this study are expected to improve the reliability of maintenance cost information and the rationality of decision making.
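
The frequency-weighted averaging described above amounts to the following simple calculation; the methods, unit costs, and frequencies below are invented for illustration and are not the study's data.

```python
detailed_methods = {
    # method: (unit cost, number of applications in the historical data)
    "surface repair":    (120_000, 45),
    "section repair":    (310_000, 30),
    "FRP strengthening": (540_000, 10),
}

total_frequency = sum(freq for _, freq in detailed_methods.values())
representative_unit_cost = sum(cost * freq for cost, freq in detailed_methods.values()) / total_frequency
print(round(representative_unit_cost))   # frequency-weighted average unit cost
```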

On the Geometric Anisotropy Inherent In Spatial Data (공간자료의 기하학적 비등방성 연구)

  • Go, Hye Ji;Park, Man Sik
    • The Korean Journal of Applied Statistics / v.27 no.5 / pp.755-771 / 2014
  • Isotropy is one of the main assumptions made for the ease of spatial prediction (known as kriging) based on covariance models. A lack of isotropy (anisotropy) in a spatial process necessitates estimating additional parameters (angle and ratio) for an anisotropic covariance model in order to produce a more reliable prediction. In this paper, we propose a new class of geometrically extended anisotropic covariance models expressed as a weighted average of geometrically anisotropic models. Maximum likelihood estimation is used to estimate the parameters of interest. We evaluate the performance of our proposal and compare it with an isotropic covariance model and a geometrically anisotropic model in simulation studies. We also apply the extended geometric anisotropy to the analysis of real data.
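
The modeling idea can be sketched as follows: a geometrically anisotropic covariance is an isotropic covariance evaluated on rotated and rescaled lag vectors, and the extended model averages several such covariances with weights. The exponential form, parameter values, and weights below are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def geo_anisotropic_cov(h, sill=1.0, range_=2.0, angle=0.0, ratio=2.0):
    """Exponential covariance on a geometrically transformed 2-D lag h."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, s], [-s, c]])        # rotate onto the anisotropy axes
    S = np.diag([1.0, 1.0 / ratio])        # shrink the minor axis by the ratio
    d = np.linalg.norm(S @ R @ np.asarray(h, dtype=float))
    return sill * np.exp(-d / range_)

def weighted_average_cov(h, models, weights):
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return float(sum(w * geo_anisotropic_cov(h, **m) for w, m in zip(weights, models)))

models = [dict(range_=2.0, angle=np.pi / 6, ratio=2.0),
          dict(range_=3.0, angle=np.pi / 3, ratio=1.5)]
print(weighted_average_cov(h=[1.0, 0.5], models=models, weights=[0.7, 0.3]))
```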