• Title/Summary/Keyword: matrix polynomial

Search results: 223

An Alternative Model for Determining the Optimal Fertilizer Level (수도(水稻) 적정시비량(適正施肥量) 결정(決定)에 대한 대체모형(代替模型))

  • Chang, Suk-Hwan
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.13 no.1
    • /
    • pp.21-32
    • /
    • 1980
  • Linear models, with and without site variables, have been investigated in order to develop an alternative methodology for determining optimal fertilizer levels. The resultant models are: (1) Model I is an ordinary quadratic response function formed by combining the simple response functions estimated at each site in block diagonal form, with parameters $[\gamma^{(1)}_{m\ell}]$ for m = 1, 2, $\cdots$, n sites and polynomial degrees $\ell$ = 0, 1, 2. (2) Model II is a multiple regression model with a set of site variables (including an intercept) repeated for each fertilizer level, and with the linear and quadratic terms of the fertilizer variables arranged in block diagonal form as in Model I. The parameters are $[\beta_h\;\gamma^{(2)}_{m\ell}]$ for h = 0, 1, 2, $\cdots$, k site variables, m = 1, 2, $\cdots$, n and $\ell$ = 1, 2. (3) Model III is a classical response surface model, i.e., a common quadratic polynomial model for the fertilizer variables augmented with site variables and interactions between site variables and the linear fertilizer terms. The parameters are $[\beta_h\;\gamma_{\ell}\;\theta_{h'}]$ for h = 0, 1, $\cdots$, k, $\ell$ = 1, 2, and h' = 1, 2, $\cdots$, k. (4) Model IV has the same basic structure as Model I, but its estimation procedure involves two stages: in stage 1, yields for each fertilizer level are regressed on the site variables, and the resulting predicted yields for each site are then regressed on the fertilizer variables in stage 2. Each model has been evaluated under the assumption that Model III is the postulated true response function. Under this assumption, Models I, II and IV give biased estimators of the linear fertilizer response parameter which depend on the interaction between site variables and applied fertilizer variables. When the interaction is significant, Model III is the most efficient for calculating the optimal fertilizer level. Model IV has been found to be always more efficient than Models I and II, with efficiency depending on the magnitude of $\lambda_m$, the mth diagonal element of $X(X'X)^{-1}X'$, where X is the site variable matrix. When the site variable by linear fertilizer interaction parameters are zero, or when the estimated interactions are unimportant, Model IV is shown to be a reasonable alternative for calculating the optimal fertilizer level. The efficiencies of the models are compared using data from 256 fertilizer trials on rice conducted in Korea. Although Model III is usually preferred, the empirical results support the feasibility of using Model IV in practice when the estimated interaction between measured soil organic matter and applied nitrogen is unimportant. (A brief numerical sketch of the optimal-level calculation follows this entry.)

  • PDF
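
The optimal level in these quadratic response models is the dose at which the marginal yield equals the fertilizer/grain price ratio. A minimal numpy sketch of the Model I-style calculation at one site, with all data and the price ratio invented for illustration:

```python
import numpy as np

# Hypothetical per-site data: applied N levels (kg/ha) and rice yields (t/ha).
levels = np.array([0.0, 40.0, 80.0, 120.0, 160.0])
yields = np.array([3.1, 4.2, 4.9, 5.1, 4.8])

# Model I at a single site: quadratic response y = g0 + g1*x + g2*x^2.
X = np.column_stack([np.ones_like(levels), levels, levels**2])
g0, g1, g2 = np.linalg.lstsq(X, yields, rcond=None)[0]

# With price ratio r = (fertilizer price)/(grain price), the economic
# optimum solves dy/dx = r, i.e. g1 + 2*g2*x = r.
r = 0.005  # assumed price ratio
x_opt = (r - g1) / (2.0 * g2)
print(f"optimal level: {x_opt:.1f} kg/ha")

# The efficiency comparison uses lambda_m, the m-th diagonal of the hat
# matrix H = X (X'X)^{-1} X'; in the paper X is the site-variable matrix,
# here the toy design matrix stands in for illustration.
H = X @ np.linalg.inv(X.T @ X) @ X.T
print("leverages:", np.diag(H).round(3))
```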

Continuous Speech Recognition based on Parametric Trajectory Segmental HMM (모수적 궤적 기반의 분절 HMM을 이용한 연속 음성 인식)

  • 윤영선;오영환
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.3
    • /
    • pp.35-44
    • /
    • 2000
  • In this paper, we propose a new trajectory model for characterizing segmental features and their interaction within the general framework of hidden Markov models. Each segment, a sequence of vectors, is represented by a trajectory of observed sequences. This trajectory is obtained by applying a new design matrix that includes transitional information on contiguous frames, and is characterized as a polynomial regression function. To apply the trajectory to the segmental HMM, the frame features are replaced with the trajectory of a given segment. We also propose the likelihood of a given segment and the estimation of the trajectory parameters. The observation probability of a given segment is represented as the relation between the segment likelihood and the estimation error of the trajectories. The estimation error of a trajectory is treated as the weight of the likelihood of a given segment in a state; this weight represents the probability of how well the corresponding trajectory characterizes the segment. The proposed model can be regarded as a generalization of the conventional HMM and the parametric trajectory model. Experimental results are reported on the TIMIT corpus, and performance is shown to improve significantly over that of the conventional HMM. (A trajectory-fitting sketch follows this entry.)

  • PDF
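
A segment-level trajectory of this kind can be sketched as an ordinary polynomial regression over normalized time; the paper's design matrix additionally carries transitional information on contiguous frames, which this toy version omits. All data below are synthetic:

```python
import numpy as np

def fit_trajectory(segment, order=2):
    """Fit a polynomial trajectory to one segment (T frames x D dims).

    Returns the regression coefficients B and the residual (estimation)
    error, which the paper uses to weight the segment likelihood.
    """
    T, _ = segment.shape
    t = np.linspace(0.0, 1.0, T)                    # normalized time in segment
    Z = np.vander(t, order + 1, increasing=True)    # design matrix [1, t, t^2]
    B, *_ = np.linalg.lstsq(Z, segment, rcond=None)
    residual = segment - Z @ B
    return B, float(np.mean(residual**2))

# Hypothetical 12-frame segment of 3-dimensional features.
rng = np.random.default_rng(0)
seg = np.cumsum(rng.normal(size=(12, 3)), axis=0)
B, err = fit_trajectory(seg)
print(B.shape, f"estimation error: {err:.3f}")
```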

Improving the Accuracy of 3D Object-space Data Extracted from IKONOS Satellite Images - By Improving the Accuracy of the RPC Model (IKONOS 영상으로부터 추출되는 3차원 지형자료의 정확도 향상에 관한 연구 - RPC 모델의 위치정확도 보정을 통하여)

  • 이재빈;곽태석;김용일
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.21 no.4
    • /
    • pp.301-308
    • /
    • 2003
  • This study describes a methodology that improves the accuracy of 3D object-space data extracted from IKONOS satellite images by improving the accuracy of the RPC (Rational Polynomial Coefficient) model. For this purpose, we developed an algorithm to adjust the RPC model, and with this algorithm and geographically well-distributed GCPs (Ground Control Points) we could improve the accuracy of the RPC model. Furthermore, when the RPC model was adjusted with this algorithm, the effects of the geographic distribution and the number of GCPs on the accuracy of the adjusted RPC model were tested. The results showed that the accuracy of the adjusted RPC model is affected more by the distribution of GCPs than by their number. On the basis of this result, an algorithm using pseudo-GCPs was developed to improve the accuracy of the RPC model when the distribution of GCPs is poor and their number is insufficient for adjustment. Even when poorly distributed GCPs were used, a geographically adjusted RPC model could be obtained by using pseudo-GCPs. The fewer pseudo-GCPs used (that is, the more heavily the GCPs were weighted relative to the pseudo-GCPs in the observation matrix), the more accurate the adjusted RPC model. Finally, to test the validity of these algorithms, we extracted 3D object-space coordinates using the adjusted RPC models and a stereo pair of IKONOS satellite images, and tested their accuracy. The results showed that the 3D object-space coordinates extracted from the adjusted RPC models were more accurate than those extracted from the original RPC models, which proves the effectiveness of the algorithms developed in this study.
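
The abstract does not spell out the adjustment model, but RPC refinement is commonly carried out as a bias correction in image space estimated from GCPs by weighted least squares; a hedged sketch under that assumption, with the affine model, the weights, and all coordinates invented for illustration:

```python
import numpy as np

def adjust_rpc(rpc_xy, gcp_xy, weights):
    """Estimate an affine correction mapping RPC-projected image
    coordinates onto measured GCP image coordinates.

    rpc_xy, gcp_xy : (n, 2) arrays of (line, sample) coordinates
    weights        : (n,) weights; true GCPs get larger weights than
                     pseudo-GCPs in the observation matrix
    """
    n = rpc_xy.shape[0]
    A = np.column_stack([np.ones(n), rpc_xy])   # [1, line, sample]
    W = np.diag(weights)
    # Weighted least squares, one affine model per output coordinate.
    coef, *_ = np.linalg.lstsq(W @ A, W @ gcp_xy, rcond=None)
    return coef                                  # (3, 2) affine parameters

# Hypothetical: 4 GCPs (weight 1.0) plus 2 pseudo-GCPs (weight 0.1).
rpc_xy = np.array([[100., 200.], [900., 250.], [150., 800.],
                   [880., 820.], [500., 100.], [500., 900.]])
gcp_xy = rpc_xy + np.array([3.0, -2.0])          # a constant RPC bias
w = np.array([1.0, 1.0, 1.0, 1.0, 0.1, 0.1])
print(adjust_rpc(rpc_xy, gcp_xy, w).round(3))
```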

Optimization Algorithm for k-opt Swap of Generalized Assignment Problem (일반화된 배정 문제의 k-opt 교환 최적화 알고리즘)

  • Sang-Un Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.5
    • /
    • pp.151-158
    • /
    • 2023
  • Researchers have focused entirely on meta-heuristic methods for the generalized assignment problem (GAP), which is known to be NP-hard because no polynomial-time algorithm for the optimal solution is yet known. This paper, by contrast, proposes a heuristic greedy algorithm with explicit rules for finding solutions. Firstly, the weight matrix of the original data is reduced to $w_{ij} \le b_i/l$ in order to pack the n jobs (items) into the m machines (bins), with l = n/m. Each job is then assigned to the machine yielding its maximum profit in the reduced data. Secondly, the allocation is adjusted so that the sum of the weights assigned to each machine does not exceed the machine capacity. Finally, k-opt swap optimization is performed to maximize the profit. The proposed algorithm was applied to 50 benchmark data sets; for about one third of them it attains the best known solution, and for the remaining two thirds it shows results comparable to meta-heuristic techniques. The proposed algorithm therefore suggests that rules for finding solutions to GAP in polynomial time may exist, and the experiments raise the possibility of treating this NP-hard problem as a P-problem.
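
A compact sketch of the three phases as described (weight reduction, greedy maximum-profit assignment, and pairwise swap, i.e. k = 2); the capacity-repair step of phase two is simplified away, and all names are ours rather than the paper's:

```python
import numpy as np

def gap_greedy_swap(profit, weight, capacity):
    """Greedy GAP heuristic: reduced-weight greedy assignment + 2-opt swaps.

    profit, weight : (m machines, n jobs) matrices
    capacity       : (m,) machine capacities b_i
    """
    m, n = profit.shape
    l = max(n // m, 1)
    feasible = weight <= capacity[:, None] / l       # reduction: w_ij <= b_i/l
    masked = np.where(feasible, profit, -np.inf)
    assign = masked.argmax(axis=0)                   # best machine per job

    def load(i):
        return weight[i, assign == i].sum()

    # Phase three: profit-improving pairwise (k = 2) swaps that keep both
    # machines within capacity. (Capacity repair is omitted for brevity.)
    improved = True
    while improved:
        improved = False
        for j1 in range(n):
            for j2 in range(j1 + 1, n):
                i1, i2 = assign[j1], assign[j2]
                if i1 == i2:
                    continue
                gain = (profit[i2, j1] + profit[i1, j2]
                        - profit[i1, j1] - profit[i2, j2])
                fits1 = load(i1) - weight[i1, j1] + weight[i1, j2] <= capacity[i1]
                fits2 = load(i2) - weight[i2, j2] + weight[i2, j1] <= capacity[i2]
                if gain > 0 and fits1 and fits2:
                    assign[j1], assign[j2] = i2, i1
                    improved = True
    return assign
```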

Nonlinear Characteristics of Non-Fuzzy Inference Systems Based on HCM Clustering Algorithm (HCM 클러스터링 알고리즘 기반 비퍼지 추론 시스템의 비선형 특성)

  • Park, Keon-Jun;Lee, Dong-Yoon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.11
    • /
    • pp.5379-5388
    • /
    • 2012
  • In fuzzy modeling of nonlinear processes, the fuzzy rules are typically formed by selecting the input variables, the number of space divisions, and the membership functions. Generating fuzzy rules for nonlinear processes suffers from the problem that the number of rules increases exponentially. To solve this problem, a complex nonlinear process can be modeled by generating the fuzzy rules through fuzzy division of the input space. In this paper, therefore, the rules of non-fuzzy inference systems are generated by partitioning the input space in scatter form using the HCM clustering algorithm. The premise parameters of the rules are determined by the membership matrix obtained from the HCM clustering algorithm. The consequent part of each rule is represented as a polynomial function, and the consequent parameters of each rule are identified by the standard least-squares method. Lastly, we evaluate the performance and the nonlinear characteristics using data widely used for nonlinear processes. Through this experiment, we show that high-dimensional nonlinear systems can be modeled with a very small number of rules.
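
HCM (hard c-means) is the crisp counterpart of fuzzy c-means, so the rule generation can be sketched with k-means clusters as scatter-form premises and least-squares polynomial (here first-order) consequents; the toy process and all names are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

# HCM (hard c-means) gives a crisp partition of the input space; each
# cluster becomes one rule with a polynomial (here linear) consequent.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(X[:, 0]) * X[:, 1] + 0.05 * rng.normal(size=300)  # toy process

n_rules = 6
km = KMeans(n_clusters=n_rules, n_init=10, random_state=0).fit(X)
labels = km.labels_

# Standard least squares per rule for the consequent parameters.
coefs = []
for r in range(n_rules):
    Xr = X[labels == r]
    A = np.column_stack([np.ones(len(Xr)), Xr])  # [1, x1, x2]
    c, *_ = np.linalg.lstsq(A, y[labels == r], rcond=None)
    coefs.append(c)

def predict(x):
    r = km.predict(x.reshape(1, -1))[0]          # crisp (non-fuzzy) membership
    return coefs[r] @ np.concatenate([[1.0], x])

print(predict(np.array([0.5, -1.0])))
```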

Central Composite Design Matrix (CCDM) for Phthalocyanine Reactive Dyeing of Nylon Fiber: Process Analysis and Optimization

  • Ravikumar, K.;Kim, Byung-Soon;Son, Young-A
    • Textile Coloration and Finishing
    • /
    • v.20 no.2
    • /
    • pp.19-28
    • /
    • 2008
  • The objective of this study was to apply the statistical technique known as design of experiments to optimize the % exhaustion variables for phthalocyanine dyeing of nylon fiber. A three-factor Central Composite Rotatable Design (CCRD) was used to establish the optimum conditions for the phthalocyanine reactive dyeing of nylon fiber. Temperature, pH and liquor ratio were taken as the variables of interest. An acidic solution with higher temperature and lower liquor ratio was found to give higher % exhaustion. These three variables were used as independent variables, whose effects on % exhaustion were evaluated. Significant polynomial regression models describing the changes in % exhaustion and % fixation with respect to the independent variables were established, with coefficients of determination $R^2$ greater than 0.90. Close agreement between experimental and predicted yields was obtained. Optimum conditions achieving maximum dyeing efficiency were obtained using surface plots and Monte Carlo simulation techniques. The significance of both the main effects and the interactions was assessed by analysis of variance (ANOVA). Based on the statistical analysis, the results provide valuable information on the relationship between the response variables and the independent variables. This study demonstrates that the CCRD can be efficiently applied to the empirical modeling of % exhaustion and % fixation in dyeing, and that it is an economical way of obtaining the maximum amount of information in a short period of time with the least number of experiments.
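
For three factors, a rotatable central composite design consists of 8 factorial points, 6 axial points at $\alpha = 2^{3/4} \approx 1.682$, and replicated center points; a sketch of the design matrix and the second-order fit, with the 20-run layout assumed as the standard choice rather than taken from the paper:

```python
import numpy as np
from itertools import product

# Three-factor CCRD: 8 factorial points, 6 axial points at
# alpha = 2**(3/4) ~ 1.682, plus 6 center points (coded units).
alpha = 2 ** (3 / 4)
factorial = np.array(list(product([-1, 1], repeat=3)), dtype=float)
axial = np.vstack([a * np.eye(3) for a in (-alpha, alpha)])
center = np.zeros((6, 3))
D = np.vstack([factorial, axial, center])   # coded temperature, pH, liquor ratio

def quadratic_terms(D):
    """Second-order model terms: intercept, linear, interaction, square."""
    x1, x2, x3 = D.T
    return np.column_stack([np.ones(len(D)), x1, x2, x3,
                            x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

# With measured % exhaustion y at the 20 runs, the polynomial regression
# model is fitted by ordinary least squares:
#   beta, *_ = np.linalg.lstsq(quadratic_terms(D), y, rcond=None)
print(D.shape)  # (20, 3): a standard 20-run CCRD
```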

Distributed Data Management based on t-(v,k,1) Combinatorial Design (t-(v,k,1) 조합 디자인 기반의 데이터 분산 관리 방식)

  • Song, You-Jin;Park, Kwang-Yong;Kang, Yeon-Jung
    • The KIPS Transactions:PartC
    • /
    • v.17C no.5
    • /
    • pp.399-406
    • /
    • 2010
  • Many problems arise from security weaknesses and invasions of privacy by malicious attackers or internal users while various data services are provided in a ubiquitous network environment. Controlling the security of diverse content and large volumes of data has emerged as an important issue in solving this problem. The allocation method of Ito, Saito and Nishizeki, based on a traditional polynomial, requires all shares to restore the shared secret information. In a threshold scheme, by contrast, the secret information can be restored once a number of shares beyond the threshold value is collected. In addition, the scheme has the effect of distributed DBMS operation, distributing and restoring the data, and in particular offers flexibility of realization through the parameters t, v, k of a combinatorial design, which provides regularity in DB server and share selection. This paper discusses the construction of a new share allocation method and a data distribution/storage management scheme that apply the matrix structure of a t-(v,k,1) design to share allocation when secret sharing is used.
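
The abstract combines threshold secret sharing with block selection by a t-(v,k,1) design; a hedged sketch using Shamir's (t, v) scheme and the blocks of the 2-(7,3,1) design (the Fano plane) to group shares per server. The construction below is ours for illustration, not the paper's exact scheme:

```python
import random

P = 2**127 - 1  # a Mersenne prime for the field arithmetic

def make_shares(secret, t, v):
    """Shamir (t, v) threshold sharing: any t shares restore the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, v + 1)]

def restore(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

# Allocate the v = 7 shares to servers along the blocks of the
# 2-(7,3,1) design (Fano plane); each block is one server's share set.
fano = [(1,2,3), (1,4,5), (1,6,7), (2,4,6), (2,5,7), (3,4,7), (3,5,6)]
shares = make_shares(42, t=3, v=7)
block = [shares[i - 1] for i in fano[0]]   # one server's three shares
print(restore(block))                      # -> 42
```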

Solution of randomly excited stochastic differential equations with stochastic operator using spectral stochastic finite element method (SSFEM)

  • Hussein, A.;El-Tawil, M.;El-Tahan, W.;Mahmoud, A.A.
    • Structural Engineering and Mechanics
    • /
    • v.28 no.2
    • /
    • pp.129-152
    • /
    • 2008
  • This paper considers the solution of stochastic differential equations (SDEs) with a random operator and/or random excitation using the spectral SFEM. The random system parameters (involved in the operator) and the random excitations are modeled as second-order stochastic processes defined only by their means and covariance functions. All random fields dealt with in this paper are continuous and have no known explicit forms dependent on the spatial dimension, a fact that makes finite element (FE) analysis difficult. Relying on the spectral properties of the covariance function, the Karhunen-Loève expansion is used to represent these processes and overcome this difficulty. A spectral approximation for the stochastic response (solution) of the SDE is then obtained based on the concept of the generalized inverse defined by the Neumann expansion. This leads to an explicit expression for the solution process as a multivariate polynomial functional of a set of uncorrelated random variables, which enables the statistical moments of the solution vector to be computed. To check the validity of the method, two applications are introduced: a randomly loaded simply supported reinforced concrete beam, and a reinforced concrete cantilever beam with random bending rigidity. Finally, a more general application, a randomly loaded simply supported reinforced concrete beam with random bending rigidity, is presented to illustrate the method.
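
The two building blocks can be sketched discretely: a Karhunen-Loève expansion from the eigendecomposition of the covariance matrix, and a truncated Neumann expansion of $(A_0 + \Delta A)^{-1}$ for the random operator. The covariance model, mesh, and operator below are assumptions chosen so the Neumann series converges:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = np.linspace(0.0, 1.0, n)

# Karhunen-Loeve expansion of a field with exponential covariance
# C(x1, x2) = sigma^2 exp(-|x1 - x2| / lc), via eigendecomposition.
sigma, lc, n_kl = 0.1, 0.3, 4
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lc)
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1][:n_kl], phi[:, ::-1][:, :n_kl]
xi = rng.normal(size=n_kl)                 # uncorrelated random variables
field = phi @ (np.sqrt(lam) * xi)          # one realization of the field

# Neumann expansion for the random operator: (A0 + dA) u = f,
# u ~ sum_k (-A0^{-1} dA)^k A0^{-1} f. A0 is diagonally dominant
# here so the series converges.
A0 = (np.diag(4.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1))
dA = np.diag(field)
f = np.ones(n)
u = np.linalg.solve(A0, f)
term = u.copy()
M = -np.linalg.solve(A0, dA)
for _ in range(3):                         # a few Neumann terms
    term = M @ term
    u = u + term

err = np.max(np.abs(u - np.linalg.solve(A0 + dA, f)))
print(f"Neumann vs direct solve: {err:.2e}")
```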

Calibration and Flight Test Results of Air Data Sensing System using Flush Pressure Ports (플러시 압력공을 사용한 대기자료 측정장치의 교정 및 비행시험 결과)

  • Lee, Chang-Ho;Park, Young-Min;Chang, Byeong-Hee;Lee, Yung-Gyo
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.45 no.7
    • /
    • pp.531-538
    • /
    • 2017
  • A flush air data sensing system, which can predict the flight speed, angle of attack, and angle of sideslip of an aircraft, was designed and manufactured for a small UAV. Two kinds of flush pressure ports, four ports and five ports, were tapped at the same section of the fuselage nose-cone. Calibration pressure data at the flush ports were obtained through computations over the whole aircraft using the Fluent code. The angle of attack, angle of sideslip, total pressure, and static pressure were each represented by a 4th-order polynomial function, and the calibration coefficient matrix was obtained from the calibration pressure data by the least-squares method. Flight tests showed that the flight speed, angle of attack, and sideslip angle predicted by the four flush ports and the five flush ports compared well with those from a five-hole probe installed for comparison. In particular, the four flush ports gave nearly the same results as the five flush ports.
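
The calibration step amounts to fitting 4th-order polynomials in non-dimensional pressure ratios by least squares; a single-output sketch for the angle of attack, with the port-response model and all numbers invented (the actual system fits speed, angle of attack, and sideslip jointly from four or five ports):

```python
import numpy as np

def poly4_design(q):
    """4th-order polynomial design matrix in one pressure-ratio variable q."""
    return np.column_stack([q**k for k in range(5)])

# Hypothetical calibration: CFD gives flush-port pressures over a sweep of
# angles of attack; a non-dimensional differential-pressure ratio q is
# formed from the port pressures, and alpha is fitted as a polynomial in q.
alpha_cal = np.deg2rad(np.linspace(-10, 10, 21))   # calibration angles
q_cal = 0.8 * alpha_cal + 0.3 * alpha_cal**3       # assumed port response

coeffs, *_ = np.linalg.lstsq(poly4_design(q_cal), alpha_cal, rcond=None)

# In flight, the measured port pressures give q, and alpha follows from
# the calibration coefficients (a single column here, for alpha alone).
q_meas = 0.05
alpha_pred = poly4_design(np.array([q_meas])) @ coeffs
print(np.rad2deg(alpha_pred))
```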

Finite element analysis of planar 4:1 contraction flow with the tensor-logarithmic formulation of differential constitutive equations

  • Kwon Youngdon
    • Korea-Australia Rheology Journal
    • /
    • v.16 no.4
    • /
    • pp.183-191
    • /
    • 2004
  • High Deborah or Weissenberg number problems in viscoelastic flow modeling have been known to be formidably difficult even in the inertialess limit. There exists almost no result that shows satisfactory accuracy and proper mesh convergence at the same time. Recently, however, quite a breakthrough seems to have been made in this field of computational rheology. The so-called matrix-logarithm (here we name it tensor-logarithm) formulation of the viscoelastic constitutive equations, originally written in terms of the conformation tensor, was suggested by Fattal and Kupferman (2004), and its finite element implementation was first presented by Hulsen (2004). Both works report an almost unbounded convergence limit in solving two benchmark problems. This new formulation incorporates proper polynomial interpolation of the logarithm for variables that exhibit steep exponential dependence near stagnation points, and it strictly preserves the positive definiteness of the conformation tensor. In this study, we present an alternative procedure for deriving the tensor-logarithmic representation of the differential constitutive equations and provide a numerical example with the Leonov model in 4:1 planar contraction flows. Dramatic improvement of the computational algorithm with stable convergence is demonstrated, and appropriate mesh convergence appears to exist, although this conclusion requires further study. This new formalism is thought to work only for the few differential constitutive equations proven globally stable, so mathematical stability criteria may play an important role in the choice and development of suitable constitutive equations. In this respect, the Leonov viscoelastic model is quite feasible and becomes all the more essential, since it has been proven globally stable and offers the simplest form in the tensor-logarithmic formulation.
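
The tensor-logarithm change of variables can be sketched through the eigendecomposition of the symmetric positive definite conformation tensor: the solver interpolates $s = \log c$, which varies only mildly where $c$ grows exponentially, and $c = \exp s$ is positive definite by construction. The numbers below are illustrative only:

```python
import numpy as np

def tensor_log(c):
    """Matrix logarithm of a symmetric positive definite conformation tensor."""
    w, v = np.linalg.eigh(c)
    return v @ np.diag(np.log(w)) @ v.T

def tensor_exp(s):
    """Inverse map; exp of any symmetric s is positive definite by construction."""
    w, v = np.linalg.eigh(s)
    return v @ np.diag(np.exp(w)) @ v.T

# A conformation tensor with the steep eigenvalue spread typical near
# stagnation points; interpolating s = log(c) tames the exponential
# growth that defeats direct polynomial interpolation of c.
c = np.array([[2.0e4, 1.0e2],
              [1.0e2, 1.1e0]])
s = tensor_log(c)
print(np.linalg.eigvalsh(tensor_exp(s)))  # positive eigenvalues recovered
```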