• Title/Summary/Keyword: hybrid empirical method

52 search results

DETACHED EDDY SIMULATION OF BASE FLOW IN SUPERSONIC MAINSTREAM (초음속 유동장에서 기저 유동의 Detached Eddy Simulation)

  • Shin, J.R.;Won, S.H.;Choi, J.Y.
    • 한국전산유체공학회:학술대회논문집
    • /
    • 2008.10a
    • /
    • pp.104-110
    • /
    • 2008
  • Detached Eddy Simulation (DES) is applied to an axisymmetric base flow in a supersonic mainstream. DES is a hybrid approach to modeling turbulence that combines the best features of the Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) approaches. In the Reynolds-averaged mode, the model is based on the Spalart-Allmaras (S-A) turbulence model; in the large-eddy simulation mode, it is based on the Smagorinsky subgrid-scale model. Accurate predictions of the base flowfield and base pressure are achieved with the DES methodology at less computational cost than pure LES and monotone integrated large-eddy simulation (MILES) approaches. The DES accurately resolves the physics of unsteady turbulent motions, such as shear-layer rollup, large-eddy motions in the downstream region, and small-eddy motions inside the recirculating region. Comparison of the results shows that the approaching boundary layers and the free shear-layer velocity profiles from the base edge must be resolved correctly for accurate prediction of base flows. Examination of the empirical constant C_DES for compressible flow analysis suggests that its optimal value may be larger in flows with strong compressibility than in incompressible flows.
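The RANS/LES switch at the heart of DES can be pictured through its hybrid length scale, d̃ = min(d, C_DES·Δ). A minimal Python sketch, assuming the standard S-A-based DES-97 formulation and the commonly used calibration C_DES ≈ 0.65 (the abstract's point is that a larger value may suit strongly compressible flows):

```python
# Sketch of the DES-97 length-scale switch (Spalart-Allmaras based).
# d is the wall distance, delta the local grid spacing; C_DES ~ 0.65 is
# the usual incompressible calibration -- the abstract suggests a larger
# value may be needed for strongly compressible flows.

def des_length_scale(d, delta, c_des=0.65):
    """Return min(d, C_DES * delta): RANS branch near the wall, LES branch away from it."""
    return min(d, c_des * delta)

# Near a wall (small d) the RANS branch is active:
print(des_length_scale(0.001, 0.01))   # the wall distance 0.001 wins
# Far from the wall the subgrid branch C_DES * delta takes over:
print(des_length_scale(1.0, 0.01))     # 0.65 * 0.01
```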


A Coherent Algorithm for Noise Revocation of Multispectral Images by Fast HD-NLM and its Method Noise Abatement

  • Hegde, Vijayalaxmi;Jagadale, Basavaraj N.;Naragund, Mukund N.
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.12spc
    • /
    • pp.556-564
    • /
    • 2021
  • Numerous conventional denoising algorithms in the spatial and transform domains struggle to preserve critical and minute structural features of an image, especially at high noise levels. Although neural network approaches are effective, they are not always practical, since they demand a large quantity of training data, are computationally complex, and take a long time to build a model. A new enhanced hybrid filtering framework is developed for denoising color images corrupted by additive white Gaussian noise, with the goal of reducing algorithmic complexity and improving performance. In the first stage of the proposed approach, the noisy image is refined using a high-dimensional non-local means (HD-NLM) filter based on principal component analysis, after which the method noise is extracted. The wavelet transform and SureShrink techniques are used to further refine this method noise. The final denoised image is created by combining the results of these two stages. Experiments were carried out on a set of standard color images corrupted by Gaussian noise at multiple standard deviations. Comparative analysis of the empirical outcomes indicates that the proposed method outperforms leading-edge denoising strategies in consistency and performance while maintaining visual quality. The algorithm ensures homogeneous noise reduction that is almost independent of noise variations. This multi-realm consolidation technique harnesses the power of both the spatial and transform domains. Rather than processing individual color channels, it works directly on the multispectral image, uses minimal resources, and produces superior-quality output in optimal execution time.
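The two-stage method-noise idea in this abstract can be sketched independently of the paper's specific filters. The sketch below uses a simple box filter as a stand-in for the PCA-based HD-NLM stage and plain soft-thresholding as a stand-in for the wavelet SureShrink stage; both stand-ins are assumptions, not the paper's algorithm:

```python
import numpy as np

def box_blur(img, k=3):
    """Stand-in smoother for the HD-NLM stage (simple k-by-k box filter)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def soft_threshold(x, t):
    """Stand-in for the SureShrink stage applied to the method noise."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hybrid_denoise(noisy, t=0.1):
    base = box_blur(noisy)                        # stage 1: rough denoising
    method_noise = noisy - base                   # detail removed by stage 1
    recovered = soft_threshold(method_noise, t)   # keep structure, drop noise
    return base + recovered                       # recombine the two stages
```

The key point the abstract makes is that the residual (method noise) still carries structure worth recovering, which is why the second stage operates on it rather than on the image itself.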

Prediction of Thermal Decomposition Temperature of Polymers Using QSPR Methods

  • Ajloo, Davood;Sharifian, Ali;Behniafar, Hossein
    • Bulletin of the Korean Chemical Society
    • /
    • v.29 no.10
    • /
    • pp.2009-2016
    • /
    • 2008
  • The relationship between the thermal decomposition temperature and the structure of a new data set of eighty monomers of different polymers was studied by multiple linear regression (MLR). The stepwise method was used for variable selection. The best descriptors were selected from over 1400 descriptors, including topological, geometrical, electronic, and hybrid descriptors. The effect of the number of descriptors on the correlation coefficient (R) and the F-ratio was considered. Two models were suggested: one with four descriptors (R² = 0.894, Q²_cv = 0.900, F = 172.1) and the other with 13 descriptors (R² = 0.956, Q²_cv = 0.956, F = 125.4).
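The QSPR workflow (descriptor matrix → MLR fit → R²) can be sketched on synthetic data. Everything below is illustrative: the descriptor values, coefficients, and noise level are invented, and the stepwise selection step is omitted in favor of a plain least-squares fit of four pre-chosen descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 80, 4                       # 80 monomers, 4 selected descriptors
X = rng.normal(size=(n, k))        # hypothetical descriptor matrix
true_coef = np.array([5.0, -3.0, 2.0, 1.0])
y = 600 + X @ true_coef + rng.normal(scale=1.0, size=n)  # synthetic Td values

# Ordinary least squares with an intercept column:
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
```

A stepwise procedure like the paper's would wrap this fit in a loop that adds or removes one descriptor at a time according to the F-ratio.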

Study on Table Design that Used Harmonization of Solidwood and Acrylic (원목(Solidwood)과 아크릴(Acrylic)의 접합을 이용한 테이블 디자인 연구)

  • We, Jin-Seok;Yoon, Yeoh-Hang
    • Journal of the Korea Furniture Society
    • /
    • v.24 no.2
    • /
    • pp.140-147
    • /
    • 2013
  • In the latest furniture design field, we can find mass production with concept of heterogeneity that is made by harmonizing existing object with other object, which has new value. In order to follow the trend in the reality, various design process method that applied concept of Hybrid is attempted by flexibly combining or changing heterogeneous materials, formations or functions such as combination of new material and wood or IT. However, not like other design fields, its research range in furniture design is limited. This study is conducted in order to overcome and supplement problems that are made when these different materials are combined, such as faults or cracks made due to difference of expansion and contraction coefficient, lack of intensity and change of formation due to external temperature and humidity. Panels that are combined for this study were verified materials that have passed environmental adaptation test throughout the period of 1 year and 2 months, which will be made into a table. By doing this, this study will be an empirical study that establish concept of furniture made with acrylic and provides manufacturing method of combining wood and acrylic. Finally it proposed a new furniture design method that follows the trend by researching new materials with the new concept.


Efficient Learning Algorithm using Structural Hybrid of Multilayer Neural Networks and Gaussian Potential Function Networks (다층 신경회로망과 가우시안 포텐샬 함수 네트워크의 구조적 결합을 이용한 효율적인 학습 방법)

  • 박상봉;박래정;박철훈
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.12
    • /
    • pp.2418-2425
    • /
    • 1994
  • Although the error backpropagation (EBP) algorithm based on the gradient descent method is a widely used learning algorithm for neural networks, learning sometimes takes a long time to reach the desired accuracy. This paper develops a novel learning method that alleviates the problems of the EBP algorithm, such as local minima, slow speed, and structure size, and thus improves performance by adopting another network, Gaussian Potential Function Networks (GPFN), in parallel with a multilayer neural network. Empirical simulations show the efficacy of the proposed algorithm in function approximation, enabling networks to be trained faster and with better generalization capabilities.
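The structural hybrid described here, an MLP branch and a GPFN (Gaussian radial-basis) branch combined in parallel, can be sketched as follows. The layer shapes and the simple additive combination are assumptions; the paper's training procedure is not reproduced:

```python
import numpy as np

def mlp_branch(x, W1, b1, W2, b2):
    """One-hidden-layer sigmoid MLP: the global, smooth approximator."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return h @ W2 + b2

def gpfn_branch(x, centers, widths, weights):
    """Gaussian potential function units: local refiners around their centers."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * widths ** 2)) @ weights

def hybrid(x, mlp_params, gpfn_params):
    """Parallel combination: the GPFN branch corrects the MLP's residual error."""
    return mlp_branch(x, *mlp_params) + gpfn_branch(x, *gpfn_params)
```

The design intuition is that the MLP captures the broad shape of the target function while the localized Gaussian units mop up errors in specific regions, which is what speeds up training relative to EBP alone.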


Goodness-of-fit tests for randomly censored Weibull distributions with estimated parameters

  • Kim, Namhyun
    • Communications for Statistical Applications and Methods
    • /
    • v.24 no.5
    • /
    • pp.519-531
    • /
    • 2017
  • We consider goodness-of-fit test statistics for Weibull distributions when data are randomly censored and the parameters are unknown. Koziol and Green (Biometrika, 63, 465-474, 1976) proposed a randomly censored version of the Cramér-von Mises statistic for a simple hypothesis, based on the Kaplan-Meier product-limit estimate of the distribution function. We apply their idea to other statistics based on the empirical distribution function, such as the Kolmogorov-Smirnov statistic and the Liao and Shimokawa (Journal of Statistical Computation and Simulation, 64, 23-48, 1999) statistic; the latter is a hybrid of the Kolmogorov-Smirnov, Cramér-von Mises, and Anderson-Darling statistics. These statistics, as well as the Koziol-Green statistic, are considered as test statistics for randomly censored Weibull distributions with estimated parameters. The null distributions depend on the estimation method, since the test statistics are not distribution free when the parameters are estimated. Maximum likelihood estimation and the graphical plotting method with least squares are considered for parameter estimation. In a simulation study, the Liao-Shimokawa statistic shows relatively high power against many alternatives; however, its null distribution depends heavily on the parameter estimation. Meanwhile, the Koziol-Green statistic provides moderate power, and its null distribution does not change significantly with the parameter estimation.
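The Kaplan-Meier product-limit estimate underlying these censored-data statistics is simple to sketch. The function below is a minimal implementation (ties are handled sequentially, which gives the standard (r−d)/r factor); a goodness-of-fit statistic would then compare 1 − S_KM with the fitted Weibull CDF F(t) = 1 − exp(−(t/η)^β) at the event times:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimate S(t) at each event time.
    times: observed times; events: 1 = failure observed, 0 = censored."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    surv, s, at_risk = {}, 1.0, len(t)
    for i in range(len(t)):
        if e[i] == 1:
            s *= (at_risk - 1) / at_risk   # survive this failure
            surv[t[i]] = s
        at_risk -= 1                       # leave the risk set either way
    return surv

# A Kolmogorov-Smirnov-type statistic would be the sup over event times of
# |(1 - S_KM(t)) - F(t; eta_hat, beta_hat)| with the Weibull CDF above.
```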

A new methodology of the development of seismic fragility curves

  • Lee, Young-Joo;Moon, Do-Soo
    • Smart Structures and Systems
    • /
    • v.14 no.5
    • /
    • pp.847-867
    • /
    • 2014
  • There are continuous efforts to mitigate structural losses from earthquakes and manage risk through seismic risk assessment, and seismic fragility curves are widely accepted as an essential tool in such efforts. Seismic fragility curves can be classified into four groups based on how they are derived: empirical, judgmental, analytical, and hybrid. Analytical fragility curves are the most widely used and can be further categorized into two subgroups, depending on whether an analytical function or a simulation method is used. Although both methods have shown decent performance on many seismic fragility problems, they often oversimplify the given problems in the reliability or structural analyses owing to their built-in assumptions. In this paper, a new method is proposed for the development of seismic fragility curves. Integration with sophisticated software packages for reliability analysis (FERUM) and structural analysis (ZEUS-NL) allows the new method to obtain more accurate seismic fragility curves at lower computational cost. Because the proposed method performs reliability analysis using the first-order reliability method, it provides component probabilities as well as useful byproducts and allows further fragility analysis at the system level. The new method was applied to a numerical example of a 2D frame structure, and the results were compared with those of Monte Carlo simulation. The method was found to generate seismic fragility curves more accurately and efficiently. The effect of system reliability analysis on the development of seismic fragility curves was also investigated using the given numerical example, and its necessity was discussed.
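The Monte Carlo baseline against which the paper compares its FORM-based method can be sketched crudely: at each intensity-measure (IM) level, sample capacity and demand and count exceedances. The lognormal models and their parameters below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def fragility_mc(im_levels, capacity_median=1.0, capacity_beta=0.4,
                 demand_scale=1.0, n=20000):
    """Crude Monte Carlo fragility: P(demand >= capacity | IM), with a
    lognormal capacity and a demand proportional to IM (both assumed)."""
    probs = []
    for im in im_levels:
        capacity = capacity_median * np.exp(capacity_beta * rng.standard_normal(n))
        demand = demand_scale * im * np.exp(0.3 * rng.standard_normal(n))
        probs.append((demand >= capacity).mean())
    return np.array(probs)

curve = fragility_mc(np.linspace(0.1, 2.0, 8))
```

FORM replaces the inner sampling loop with a search for the most probable failure point, which is why the paper can report comparable curves at far lower computational cost.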

A comparison of three performance-based seismic design methods for plane steel braced frames

  • Kalapodis, Nicos A.;Papagiannopoulos, George A.;Beskos, Dimitri E.
    • Earthquakes and Structures
    • /
    • v.18 no.1
    • /
    • pp.27-44
    • /
    • 2020
  • This work presents a comparison of three performance-based seismic design (PBSD) methods as applied to plane steel frames having eccentric braces (EBFs) and buckling-restrained braces (BRBFs). The first method uses equivalent modal damping ratios (ξk), referring to an equivalent multi-degree-of-freedom (MDOF) linear system that retains the mass and elastic stiffness of, and responds in the same way as, the original non-linear MDOF system. The second method employs modal strength reduction factors (q̄k) derived from the corresponding modal damping ratios. Contrary to the behavior factors of code-based design methods, both ξk and q̄k account for the first few modes of significance and incorporate target deformation metrics, such as the inter-storey drift ratio (IDR) and local ductility, as well as structural characteristics, such as the natural period and soil type. Explicit empirical expressions for ξk and q̄k, recently presented by the authors elsewhere, are also provided here for completeness and easy reference. The third method, developed here by the authors, is based on a hybrid force/displacement (HFD) seismic design scheme, since it combines the force-based design (FBD) method with the displacement-based design (DBD) method. In this method, seismic design is accomplished by using a behavior factor (qh), empirically expressed in terms of the global ductility of the frame, which takes into account both non-structural and structural deformation metrics. These expressions for qh are obtained through extensive parametric studies involving non-linear dynamic analysis (NLDA) of 98 frames subjected to 100 far-fault ground motions corresponding to the four soil types of Eurocode 8. These factors can be used in conjunction with an elastic acceleration design spectrum for seismic design purposes. Finally, a comparison among the above three seismic design methods and the Eurocode 8 method is conducted with the aid of non-linear dynamic analyses on representative numerical examples involving plane steel EBFs and BRBFs.
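The core step of the second method, reducing each modal spectral ordinate by its own strength reduction factor before combining modes, can be sketched as follows. The SRSS combination rule here is an assumption for illustration; the paper's actual combination of modal contributions may differ:

```python
import numpy as np

def reduced_modal_demand(sa_elastic, qbar):
    """Divide each mode's elastic spectral ordinate by its own strength
    reduction factor q_bar_k, then combine modes by SRSS (assumed rule)."""
    reduced = np.asarray(sa_elastic, dtype=float) / np.asarray(qbar, dtype=float)
    return np.sqrt((reduced ** 2).sum())

# Two modes with elastic ordinates 4.0 and 3.0 and per-mode factors 2.0 and 1.0:
demand = reduced_modal_demand([4.0, 3.0], [2.0, 1.0])
```

The contrast with code-based design is visible in the signature: a single global behavior factor q would divide every ordinate by the same value, whereas q̄k is applied per mode.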

A Hybrid Under-sampling Approach for Better Bankruptcy Prediction (부도예측 개선을 위한 하이브리드 언더샘플링 접근법)

  • Kim, Taehoon;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.173-190
    • /
    • 2015
  • The purpose of this study is to improve bankruptcy prediction models by using a novel hybrid under-sampling approach. Most prior studies have tried to enhance the accuracy of bankruptcy prediction models by improving the classification methods involved. In contrast, we focus on appropriate data preprocessing as a means of enhancing accuracy. In particular, we aim to develop an effective sampling approach for bankruptcy prediction, since most prediction models suffer from class imbalance problems. The approach proposed in this study is a hybrid under-sampling method that combines the k-reverse nearest neighbor (k-RNN) and one-class support vector machine (OCSVM) approaches. k-RNN can effectively eliminate outliers, while OCSVM contributes to the selection of informative training samples from the majority-class data. To validate the proposed approach, we applied it to data on H Bank's non-externally-audited companies in Korea, and compared the performance of classifiers trained on data prepared with the proposed under-sampling against data prepared with random sampling. The empirical results show that the proposed under-sampling approach generally improves the accuracy of classifiers such as logistic regression, discriminant analysis, decision trees, and support vector machines. They also show that the proposed approach reduces the risk of false negative errors, which lead to higher misclassification costs.
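The k-RNN half of the proposed hybrid can be sketched with a reverse-nearest-neighbor count: points that appear in few other points' k-nearest-neighbor lists are treated as outliers and dropped. This is a from-scratch illustration, not the paper's implementation, and the OCSVM stage is only noted in a comment:

```python
import numpy as np

def reverse_nn_counts(X, k=3):
    """For each point, count how many other points include it among
    their k nearest neighbours (its reverse-nearest-neighbor count)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbour
    knn = np.argsort(d, axis=1)[:, :k]     # each row's k nearest neighbours
    counts = np.zeros(len(X), dtype=int)
    for row in knn:
        counts[row] += 1
    return counts

def krnn_filter(X, k=3, min_count=1):
    """Drop points with low reverse-NN counts (likely outliers). The OCSVM
    stage of the paper would then select informative samples from what remains."""
    return X[reverse_nn_counts(X, k) >= min_count]
```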

Recommender System based on Product Taxonomy and User's Tendency (상품구조 및 사용자 경향성에 기반한 추천 시스템)

  • Lim, Heonsang;Kim, Yong Soo
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.36 no.2
    • /
    • pp.74-80
    • /
    • 2013
  • In this study, a novel and flexible recommender system was developed based on product taxonomy and the usage patterns of users. The proposed system consists of four steps: (i) estimation of the product-preference matrix, (ii) construction of the genre-preference matrix, (iii) estimation of the popularity and similarity levels of sought-after products, and (iv) recommendation of products to the user. The product-preference matrix for each user is estimated through a linear combination of clicks, basket placements, and purchases. The preference matrix for a particular genre is then constructed by computing the ratios of the numbers of clicks, basket placements, and purchases of a product to the respective totals. The popularity and similarity levels of a user's clicked product are estimated with an entropy index. Based on this information, collaborative and content-based filtering are used to recommend products to the user. To assess the effectiveness of the proposed approach, an empirical study was conducted on a purpose-built experimental e-commerce site. Our results clearly show that the proposed hybrid method is superior to conventional methods.
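The first step, a linear combination of the three usage signals, and the genre-level ratios can be sketched directly. The weights below are hypothetical; the abstract does not give the paper's calibration:

```python
import numpy as np

# Hypothetical weights: a purchase signals more preference than a basket
# placement, which signals more than a click (these values are assumptions).
W_CLICK, W_BASKET, W_BUY = 1.0, 3.0, 5.0

def preference_matrix(clicks, baskets, buys):
    """Linear combination of the three usage signals per (user, product) cell."""
    return W_CLICK * clicks + W_BASKET * baskets + W_BUY * buys

def genre_ratios(clicks, baskets, buys):
    """Share of each signal type within a genre, as the abstract describes."""
    totals = np.array([clicks.sum(), baskets.sum(), buys.sum()], dtype=float)
    return totals / totals.sum()
```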