• Title/Summary/Keyword: orthogonal basis


Multi-objective Optimization in Discrete Design Space using the Design of Experiment and the Mathematical Programming (실험계획법과 수리적방법을 이용한 이산설계 공간에서의 다목적 최적설계)

  • Lee, Dong-Woo;Baek, Seok-Heum;Lee, Kyoung-Young;Cho, Seok-Swoo;Joo, Won-Sik
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.26 no.10
    • /
    • pp.2150-2158
    • /
    • 2002
  • Recent research and development requires optimization both to shorten the design time of modified or new product models and to obtain more precise engineering solutions. A general optimization problem must consider many conflicting objective functions simultaneously. Multi-objective optimization handles multiple objective functions and constraints as the design changes. However, real engineering problems cannot describe constraints and objective functions accurately owing to the limits of representation. Therefore, this study applies variance analysis, on the basis of structural analysis and design of experiments (DOE), to a vertical roller mill for portland cement and proposes a statistical design model to evaluate the effect of structural modification under design change by performing practical multi-objective optimization considering mass, stress, and deflection.
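The core of multi-objective optimization in a discrete design space is identifying the non-dominated (Pareto-optimal) candidates. A minimal sketch, using hypothetical candidate designs and objective values rather than the paper's vertical roller mill data (all three objectives minimized):

```python
# Minimal sketch of multi-objective screening over a discrete design space.
# The candidates and objective values below are hypothetical, not the
# paper's vertical roller mill data; all three objectives are minimized.
def pareto_front(designs):
    """Return the non-dominated designs (minimizing every objective)."""
    front = []
    for name, objs in designs.items():
        dominated = any(
            all(o2 <= o1 for o1, o2 in zip(objs, other)) and other != objs
            for other_name, other in designs.items() if other_name != name
        )
        if not dominated:
            front.append(name)
    return front

# (mass [kg], max stress [MPa], deflection [mm]) per discrete candidate
candidates = {
    "A": (120.0, 310.0, 1.8),
    "B": (135.0, 250.0, 1.2),
    "C": (118.0, 330.0, 2.1),
    "D": (140.0, 260.0, 1.3),  # dominated by B on all three objectives
}
print(pareto_front(candidates))  # → ['A', 'B', 'C']
```

In a DOE-based workflow the objective values would come from statistical surrogate models fit to the experiment results rather than being tabulated directly.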

Application of Response Surface Methodology and Plackett Burman Design assisted with Support Vector Machine for the Optimization of Nitrilase Production by Bacillus subtilis AGAB-2

  • Ashish Bhatt;Darshankumar Prajapati;Akshaya Gupte
    • Microbiology and Biotechnology Letters
    • /
    • v.51 no.1
    • /
    • pp.69-82
    • /
    • 2023
  • Nitrilases are a hydrolase group of enzymes that catalyze nitrile compounds and produce industrially important organic acids. The current objective is to optimize nitrilase production from a novel nitrile-degrading isolate using statistical methods assisted by an artificial intelligence (AI) tool. A nitrile-hydrolyzing bacterium, Bacillus subtilis AGAB-2 (GenBank accession number MW857547), was isolated from industrial effluent waste through an enrichment culture technique. The culture conditions were optimized by creating an orthogonal design with 7 variables to investigate the effect of the significant factors on nitrilase activity. On the basis of the obtained data, an AI-driven support vector machine was used for the fitted regression, which yielded new sets of predicted responses with zero mean error and reduced root mean square error. The results of this global optimization were regarded as the theoretical optimal conditions. Nitrilase activity of 9832 ± 15.3 U/ml was obtained under optimized conditions, a 5.3-fold increase compared to the unoptimized conditions (1822 ± 18.42 U/ml). The statistical optimization method involving Plackett-Burman design and response surface methodology in combination with an AI tool created a better response prediction model with a significant improvement in enzyme production.
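An 8-run two-level orthogonal design for 7 factors, of the kind used to screen culture conditions here, can be built from a Hadamard matrix. A sketch (the factor assignment is illustrative, not the study's actual culture-condition variables):

```python
import numpy as np

# Sketch of an 8-run two-level orthogonal (Plackett-Burman type) design for
# 7 factors, built from a Sylvester Hadamard matrix. The columns are
# illustrative placeholders, not the actual culture-condition variables.
def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H8 = hadamard(8)
design = H8[:, 1:]          # drop the all-ones column -> 8 runs x 7 factors
# Every pair of factor columns is orthogonal (dot product zero), so the
# main effects can be estimated independently of one another.
gram = design.T @ design
print(gram)                 # 8 times the 7x7 identity matrix
```

The ±1 entries in each row give the low/high level of each factor for that run; fitting a linear model to the measured responses then yields independent main-effect estimates.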

Voxel-wise UV parameterization and view-dependent texture synthesis for immersive rendering of truncated signed distance field scene model

  • Kim, Soowoong;Kang, Jungwon
    • ETRI Journal
    • /
    • v.44 no.1
    • /
    • pp.51-61
    • /
    • 2022
  • In this paper, we introduce a novel voxel-wise UV parameterization and view-dependent texture synthesis for the immersive rendering of a truncated signed distance field (TSDF) scene model. The proposed UV parameterization delegates a precomputed UV map to each voxel using a UV map lookup table, consequently enabling efficient and high-quality texture mapping without a complex process. By leveraging this convenient UV parameterization, our view-dependent texture synthesis method extracts a set of local texture maps for each voxel from the multiview color images and separates them into a single view-independent diffuse map and a set of weight coefficients for an orthogonal specular map basis. Furthermore, the view-dependent specular maps for an arbitrary view are estimated by combining the specular weights of each source view, based on the locations of the arbitrary and source viewpoints, to generate the view-dependent textures for arbitrary views. The experimental results demonstrate that the proposed method effectively synthesizes texture for an arbitrary view, thereby enabling the visualization of view-dependent effects such as specularity and mirror reflection.
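The diffuse/specular separation described above can be sketched with an SVD: the mean over views gives the view-independent part, and the residual factors into an orthonormal basis with per-view weights. Random vectors stand in for real texture maps, and the sizes are illustrative:

```python
import numpy as np

# Sketch of separating per-view textures into a view-independent diffuse map
# plus weights over an orthogonal specular basis, in the spirit of the paper.
rng = np.random.default_rng(0)
n_views, n_texels = 6, 64
views = rng.normal(size=(n_views, n_texels))      # one texture row per view

diffuse = views.mean(axis=0)                      # view-independent part
residual = views - diffuse                        # view-dependent (specular) part

# SVD gives an orthonormal specular basis (rows of Vt) and per-view weights.
U, s, Vt = np.linalg.svd(residual, full_matrices=False)
weights = U * s                                   # n_views x n_components

# Any source view is recovered as diffuse + its weights times the basis;
# a novel view would instead blend the weights of nearby source views.
recon = diffuse + weights @ Vt
print(np.allclose(recon, views))                  # → True
```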

An improved fuzzy c-means method based on multivariate skew-normal distribution for brain MR image segmentation

  • Guiyuan Zhu;Shengyang Liao;Tianming Zhan;Yunjie Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.8
    • /
    • pp.2082-2102
    • /
    • 2024
  • Accurate segmentation of magnetic resonance (MR) images is crucial for providing doctors with effective quantitative information for diagnosis. However, the presence of weak boundaries, intensity inhomogeneity, and noise in the images poses challenges for segmentation models to achieve optimal results. While deep learning models can offer relatively accurate results, the scarcity of labeled medical imaging data increases the risk of overfitting. To tackle this issue, this paper proposes a novel fuzzy c-means (FCM) model that integrates a deep learning approach. To address the limited accuracy of traditional FCM models, which employ Euclidean distance as a distance measure, we introduce a measurement function based on the skewed normal distribution. This function enables us to capture more precise information about the distribution of the image. Additionally, we construct a regularization term based on the Kullback-Leibler (KL) divergence of high-confidence deep learning results. This regularization term helps enhance the final segmentation accuracy of the model. Moreover, we incorporate orthogonal basis functions to estimate the bias field and integrate it into the improved FCM method. This integration allows our method to simultaneously segment the image and estimate the bias field. The experimental results on both simulated and real brain MR images demonstrate the robustness of our method, highlighting its superiority over other advanced segmentation algorithms.
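The bias-field component of the method above models a smooth multiplicative field as a linear combination of orthogonal basis functions. A minimal 1-D sketch with Legendre polynomials and least squares (toy coefficients, not the paper's model or data):

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch of bias-field estimation with orthogonal basis functions: the
# smooth bias is modeled as a combination of low-order Legendre polynomials
# and fit by least squares. 1-D toy data; coefficients are hypothetical.
x = np.linspace(-1.0, 1.0, 201)
true_coeffs = np.array([1.0, 0.3, -0.2])
bias = legendre.legval(x, true_coeffs)            # the smooth "bias field"

# Design matrix whose columns are Legendre polynomials P0..P2 evaluated at x.
basis = legendre.legvander(x, 2)                  # shape (201, 3)
est_coeffs, *_ = np.linalg.lstsq(basis, bias, rcond=None)
print(np.round(est_coeffs, 6))                    # recovers true_coeffs
```

In the full FCM scheme this fit alternates with the membership updates, so segmentation and bias estimation proceed simultaneously.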

Statistical Analysis of Projection-Based Face Recognition Algorithms (투사에 기초한 얼굴 인식 알고리즘들의 통계적 분석)

  • 문현준;백순화;전병민
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.5A
    • /
    • pp.717-725
    • /
    • 2000
  • Within the last several years, a large number of algorithms have been developed for face recognition. The majority of these have been view- and projection-based algorithms. Our definition of projection is not restricted to projecting the image onto an orthogonal basis; the definition is expansive and includes a general class of linear transformations of the image pixel values. The class includes correlation, principal component analysis, clustering, gray-scale projection, and matching pursuit filters. In this paper, we perform a detailed analysis of this class of algorithms by evaluating them on the FERET database of facial images. In our experiments, a projection-based algorithm consists of three steps. The first step is done off-line and determines the new basis for the images. The basis is either set by the algorithm designer or is learned from a training set. The last two steps are on-line and perform the recognition. The second step projects an image onto the new basis, and the third step recognizes a face with a nearest-neighbor classifier. The classification is performed in the projection space. Most evaluation methods report algorithm performance on a single gallery. This does not fully capture algorithm performance. In our study, we construct a set of independent galleries. This allows us to see how individual algorithm performance varies over different galleries. In addition, we report on the relative performance of the algorithms over the different galleries.
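The three steps above (learn a basis off-line, project, classify by nearest neighbor) can be sketched with PCA via SVD. Random vectors stand in for face images; sizes are illustrative:

```python
import numpy as np

# Sketch of a projection-based recognition pipeline: (1) learn an orthogonal
# basis off-line (PCA via SVD), (2) project the gallery, (3) classify a probe
# by nearest neighbor in projection space. Random data stands in for faces.
rng = np.random.default_rng(1)
gallery = rng.normal(size=(5, 100))               # 5 identities, 100 pixels

mean = gallery.mean(axis=0)
_, _, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
basis = Vt[:4]                                    # keep 4 principal axes

proj_gallery = (gallery - mean) @ basis.T         # gallery in projection space

probe = gallery[2] + 0.05 * rng.normal(size=100)  # noisy copy of identity 2
proj_probe = (probe - mean) @ basis.T
match = np.argmin(np.linalg.norm(proj_gallery - proj_probe, axis=1))
print(match)
```

Swapping the SVD basis for any other linear transformation (correlation templates, matching pursuit filters, and so on) changes only the off-line step; the on-line projection and nearest-neighbor steps stay the same.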


Factor Analysis for Exploratory Research in the Distribution Science Field (유통과학분야에서 탐색적 연구를 위한 요인분석)

  • Yim, Myung-Seong
    • Journal of Distribution Science
    • /
    • v.13 no.9
    • /
    • pp.103-112
    • /
    • 2015
  • Purpose - This paper aims to provide a step-by-step approach to factor analytic procedures, such as principal component analysis (PCA) and exploratory factor analysis (EFA), and to offer a guideline for factor analysis. Some authors have argued that the results of PCA and EFA are substantially similar. Additionally, they assert that PCA is a more appropriate technique for factor analysis because PCA produces easily interpreted results that are likely to be the basis of better decisions. For these reasons, many researchers have used PCA instead of EFA. However, these techniques are clearly different. PCA should be used for data reduction. EFA, on the other hand, is tailored to identify an underlying factor structure: a set of latent factors that cause the measured variables to covary. Thus, guidelines and procedures for factor analysis are needed. To date, however, these two techniques have been indiscriminately misused. Research design, data, and methodology - This research conducted a literature review. We summarized the meaningful and consistent arguments and drew up guidelines and suggested procedures for rigorous EFA. Results - PCA can be used instead of common factor analysis when all measured variables have high communality. However, common factor analysis is recommended for EFA. First, researchers should evaluate the sample size and check for sampling adequacy before conducting factor analysis. If these conditions are not satisfied, the next steps cannot be followed. The sample size must be at least 100, with communality above 0.5 and a subject-to-item ratio of at least 5:1, with a minimum of five items in the EFA. Next, Bartlett's sphericity test and the Kaiser-Meyer-Olkin (KMO) measure should be assessed for sampling adequacy. The chi-square value for Bartlett's test should be significant, and a KMO of more than 0.8 is recommended. The next step is to conduct the factor analysis itself.
The analysis is composed of three stages. The first stage determines an extraction technique; generally, maximum likelihood (ML) or principal axis factoring (PAF) will give researchers the best results. The choice between the two hinges on data normality: ML requires normally distributed data, whereas PAF does not. The second stage determines the number of factors to retain in the EFA. The best way to determine this is to apply three methods together: eigenvalues greater than 1.0, the scree plot test, and the variance extracted. The last stage is to select one of two rotation methods: orthogonal or oblique. If the research suggests that some factors are correlated with each other, the oblique method should be selected for factor rotation, because it allows the factors to be correlated. If not, the orthogonal method is appropriate for factor rotation. Conclusions - Recommendations are offered for best factor analytic practice in empirical research.
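The "eigenvalues greater than 1.0" retention rule mentioned above can be sketched on synthetic data: two underlying factors generate six measured variables, and the eigenvalues of the correlation matrix recover that count (the data-generating model here is purely illustrative):

```python
import numpy as np

# Sketch of the Kaiser "eigenvalues > 1.0" retention rule on hypothetical
# data: two latent factors each drive three of six measured variables.
rng = np.random.default_rng(2)
n = 300
f1, f2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([
    f1 + 0.3 * rng.normal(size=n),   # items loading on factor 1
    f1 + 0.3 * rng.normal(size=n),
    f1 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),   # items loading on factor 2
    f2 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
])
R = np.corrcoef(X, rowvar=False)                 # 6x6 correlation matrix
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
n_retained = int(np.sum(eigenvalues > 1.0))      # Kaiser criterion
print(n_retained)                                # → 2
```

In practice this rule should be cross-checked against the scree plot and the variance extracted, as the text recommends, rather than used alone.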

A study on the Effects of Input Parameters on Springback Prediction Accuracy (스프링백 해석 정도 향상을 위한 입력조건에 관한 연구)

  • Han, Y.S.;Oh, S.W.;Choi, K.Y.
    • Proceedings of the Korean Society for Technology of Plasticity Conference
    • /
    • 2007.05a
    • /
    • pp.285-288
    • /
    • 2007
  • The use of commercial finite element analysis software to perform entire process analysis and springback analysis has increased rapidly over the last decade. Pamstamp2G is one of the most widely used commercial packages, but its springback prediction accuracy is still imperfect. Because springback prediction accuracy is sensitive to the input parameters, we must select the combination of input parameters that gives the highest accuracy in Pamstamp2G. We therefore study the effect of the input parameters, using a member part, to acquire high springback prediction accuracy. First, we choose four important parameters on the basis of experiment: the adaptive mesh level at the drawing stage and at the cam flange stage, the number of Gauss integration points through the thickness, and the cam offset. Second, we make an orthogonal array table L8(2^7) consisting of 8 cases combining the 4 input parameters, compare them with the tryout result, and select the main factors after analyzing the influence of the input parameters by Taguchi's method within a Six Sigma framework. Third, we simulate again after refining the conditions of the parameters with large effects. Finally, we find the best combination of input parameters for the highest springback prediction accuracy in Pamstamp2G. The results of the study guide the selection of input parameters for Pamstamp2G users who want to increase springback prediction accuracy.
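Taguchi-style main-effect analysis on an L8(2^7) orthogonal array can be sketched as follows. Only 4 of the 7 columns are assigned to factors, mirroring the study's setup; the response values are hypothetical, not the tryout measurements:

```python
import numpy as np

# Sketch of Taguchi main-effect analysis on the standard L8(2^7) orthogonal
# array: 8 runs, factors at levels 1/2. Responses below are hypothetical.
L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])
response = np.array([3.1, 2.8, 3.6, 3.4, 2.2, 2.0, 2.9, 2.6])  # hypothetical

factor_cols = [0, 1, 2, 3]       # columns assigned to the 4 input parameters
effects = {}
for col in factor_cols:
    lvl1 = response[L8[:, col] == 1].mean()
    lvl2 = response[L8[:, col] == 2].mean()
    effects[col] = lvl2 - lvl1   # main effect of switching levels

# The factor with the largest absolute effect dominates the response.
dominant = max(effects, key=lambda c: abs(effects[c]))
print(dominant, round(effects[dominant], 3))
```

Because the array is orthogonal, each level-mean comparison averages out the other factors, so the four effects can be ranked directly from the eight runs.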


The Three Dimensional Analysis of the Upper Body's Segments of the Elderly during Walking (보행 시 노인의 상체 움직임에 대한 3차원적 분석)

  • Kim, Hee-Su;Yoon, Hee-Joong;Ryu, Ji-Seon;Kim, Tae-Sam
    • Korean Journal of Applied Biomechanics
    • /
    • v.14 no.3
    • /
    • pp.1-15
    • /
    • 2004
  • The purpose of this study was to investigate the kinematic variables of the upper part of the body in 8 elderly men during walking. Kinematic data were collected using a six-camera (240 Hz) Qualisys ProReflex system. The room coordinate system was right-handed and fixed in space, with right-handed orthogonal segment coordinate systems defined for the head, trunk, and pelvis. Based on a rigid body model, reflective marker triads were attached to the 3 segments. Three-dimensional Cartesian coordinates for each marker were determined at the time of recording using a nonlinear transformation (NLT) technique with ProReflex software (Qualisys, Inc.). Coordinate data were low-pass filtered using a fourth-order Butterworth filter with a cutoff frequency of 6 Hz. Three-dimensional angles of the head, trunk, and pelvis segments were determined using a Cardan method. On the basis of each segment angle, angle-angle plots were used to estimate the movement coordination between segments. The conclusions were as follows: (1) During the support phase of walking, the elderly generally kept the head in a flexed and abducted posture; in particular, they displayed little internal/external rotation. (2) The elderly showed extended and externally rotated postures in the trunk; in particular, the trunk changed from adduction to abduction at the heel-contact event of the stance phase. (3) The elderly showed nearly the same pelvis movement from flexion to extension, from abduction to adduction, and from internal to external rotation at mid-stance and toe-off of the stance phase.
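The Cardan method mentioned above extracts three segment angles from a rotation matrix. A sketch for the XYZ sequence, verified by a round trip on an arbitrary test rotation (not gait data; the study's actual rotation sequence is not specified here):

```python
import numpy as np

# Sketch of Cardan (XYZ-sequence) angle extraction from a rotation matrix;
# the angles below are an arbitrary test case, not gait measurements.
def rot_xyz(a, b, c):
    """Rotation matrix R = Rx(a) @ Ry(b) @ Rz(c), angles in radians."""
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cc, sc = np.cos(c), np.sin(c)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cc, -sc, 0], [sc, cc, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def cardan_xyz(R):
    """Recover (a, b, c) from R = Rx(a) @ Ry(b) @ Rz(c), assuming |b| < pi/2."""
    b = np.arcsin(R[0, 2])
    a = np.arctan2(-R[1, 2], R[2, 2])
    c = np.arctan2(-R[0, 1], R[0, 0])
    return a, b, c

angles = (0.2, -0.4, 0.1)          # e.g. flexion, abduction, rotation (rad)
recovered = cardan_xyz(rot_xyz(*angles))
print(np.allclose(recovered, angles))   # → True
```

Applied frame by frame to each segment's rotation matrix, this yields the angle time series from which the angle-angle coordination plots are drawn.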

Research on Discontinuous Pulse Width Modulation Algorithm for Single-phase Voltage Source Rectifier

  • Yang, Xi-Jun;Qu, Hao;Tang, Hou-Jun;Yao, Chen;Zhang, Ning-Yun;Blaabjerg, Frede
    • Journal of international Conference on Electrical Machines and Systems
    • /
    • v.3 no.4
    • /
    • pp.433-445
    • /
    • 2014
  • The single-phase voltage source converter (VSC) is an important power electronic converter (PEC), encompassing the single-phase voltage source inverter (VSI), single-phase voltage source rectifier (VSR), single-phase active power filter (APF), and single-phase grid-connection inverter (GCI). As a fundamental part of large-scale PECs, the single-phase VSC has a wide range of applications. In this paper, first, on the basis of the concept of discontinuous pulse-width modulation (DPWM) for three-phase VSCs, a new DPWM for the single-phase VSR is presented by means of zero-sequence component injection. Then, the transformation from the stationary frame (abc) to the rotating frame (dq) is designed after reconstructing the orthogonal current by means of a first-order all-pass filter. Finally, the presented DPWM-based single-phase VSR is established, analyzed, and simulated by means of MATLAB/SIMULINK. In addition, the DPWMs presented by D. Grahame Holmes and Thomas Lipo are discussed and simulated briefly. The presented DPWM can obviously also be used for the single-phase VSI, GCI, and APF. The simulation results validate the above modulation algorithm, and the DPWM-based single-phase VSR has reduced power loss and increased efficiency.
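The orthogonal-current reconstruction and dq transformation described above can be sketched numerically. The paper uses a first-order all-pass filter to build the 90-degree-shifted component; here an ideal quarter-period delay stands in for that filter, which is a simplification, not the paper's implementation:

```python
import numpy as np

# Sketch of the single-phase dq transformation: an orthogonal copy of the
# measured current is constructed (here by an ideal quarter-period delay,
# standing in for the paper's first-order all-pass filter), then both are
# rotated into the dq frame, where a sinusoid becomes DC quantities.
f, fs, amp = 50.0, 10_000.0, 10.0
t = np.arange(0, 0.1, 1 / fs)                 # 0.1 s = 5 full periods
theta = 2 * np.pi * f * t
i_alpha = amp * np.cos(theta)                 # measured single-phase current
shift = int(fs / f / 4)                       # samples in a quarter period
i_beta = np.roll(i_alpha, shift)              # 90-degree lagged copy
# (the window spans an integer number of periods, so the roll wraps cleanly)

i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
print(i_d.mean(), i_q.mean())                 # d ≈ amplitude, q ≈ 0
```

With the constant dq quantities in hand, ordinary PI current regulators can be used, which is what makes the rotating-frame control of a single-phase VSR practical.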

Design the Structure of Scaling-Wavelet Mixed Neural Network (스케일링-웨이블릿 혼합 신경회로망 구조 설계)

  • Kim, Sung-Soo;Kim, Yong-Taek;Seo, Jae-Yong;Cho, Hyun-Chan;Jeon, Hong-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.12 no.6
    • /
    • pp.511-516
    • /
    • 2002
  • Neural networks can have the problem that the amount of calculation for network learning grows too large with the dimension of the input. To overcome this problem, wavelet neural networks (WNNs), which use orthogonal basis functions in the hidden nodes, have been proposed. One can compose wavelet functions as activation functions in the WNN by determining the scale and center of each wavelet function. In this paper, when we compose the WNN using wavelet functions, we also set a single scaling function as a node function. The intent is that the one scaling function approximates the target function roughly, while the other wavelet functions approximate it finely. During the determination of the parameters, the wavelet functions are determined by a global search for solutions suited to the given problem using a genetic algorithm, and finally, the back-propagation algorithm is used to learn the weights.
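The coarse-plus-fine idea above can be sketched with one Gaussian scaling node and a bank of Mexican-hat wavelet nodes. The paper tunes the wavelet centers and scales with a genetic algorithm and trains the weights by back-propagation; the fixed grid and least-squares weight fit here are simplifications:

```python
import numpy as np

# Sketch of a scaling-wavelet mixed network: one coarse Gaussian scaling
# node plus Mexican-hat wavelet nodes at fixed centers/scales, with only
# the output weights fit by least squares (a stand-in for GA + backprop).
def mexican_hat(t):
    return (1.0 - t**2) * np.exp(-0.5 * t**2)

x = np.linspace(-3.0, 3.0, 200)
target = np.sin(x)                               # function to approximate

scaling = np.exp(-0.5 * (x / 3.0) ** 2)          # one coarse scaling node
centers = np.linspace(-3.0, 3.0, 11)
wavelets = [mexican_hat((x - c) / 0.6) for c in centers]

Phi = np.column_stack([scaling] + wavelets)      # hidden-layer outputs
w, *_ = np.linalg.lstsq(Phi, target, rcond=None) # output weights
approx = Phi @ w
print(np.max(np.abs(approx - target)))           # small residual
```

The scaling node carries the slowly varying trend of the target while the localized wavelets correct the fine detail, which is exactly the division of labor the paper proposes.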