• Title/Summary/Keyword: Case Generalization

ON THE κ-REGULAR SEQUENCES AND THE GENERALIZATION OF F-MODULES

  • Ahmadi-Amoli, Khadijeh;Sanaei, Navid
    • Journal of the Korean Mathematical Society
    • /
    • v.49 no.5
    • /
    • pp.1083-1096
    • /
    • 2012
  • For a given ideal I of a Noetherian ring R and an arbitrary integer ${\kappa}{\geq}-1$, we apply the concept of ${\kappa}$-regular sequences and the notion of ${\kappa}$-depth to give some results on modules called ${\kappa}$-Cohen-Macaulay modules, which, in the local case, are exactly the ${\kappa}$-modules (a generalization of f-modules). Meanwhile, in a particular case, we give an expression of local cohomology with respect to any ${\kappa}$-regular sequence in I. We prove that the dimension of the homology modules of the Koszul complex with respect to any ${\kappa}$-regular sequence is at most ${\kappa}$. Therefore the homology modules of the Koszul complex with respect to any filter regular sequence have finite length.
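
The main bound stated above can be written compactly. A sketch in standard notation, assuming $K_\bullet(x_1,\dots,x_n;M)$ denotes the Koszul complex of $M$ with respect to elements $x_1,\dots,x_n$ of $I$ and that filter regular sequences correspond to the case ${\kappa}=0$:

```latex
% Dimension bound for Koszul homology along a kappa-regular sequence
% (notation assumed, not taken verbatim from the paper)
\[
  \dim_R H_i\bigl(K_\bullet(x_1,\dots,x_n;M)\bigr) \;\le\; \kappa
  \qquad \text{for all } i .
\]
% For a filter regular sequence (the case kappa = 0) each homology module
% therefore has dimension at most 0, hence finite length:
\[
  \ell_R\bigl(H_i(K_\bullet(x_1,\dots,x_n;M))\bigr) \;<\; \infty .
\]
```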

Seismic response control of buildings with force saturation constraints

  • Ubertini, Filippo;Materazzi, A. Luigi
    • Smart Structures and Systems
    • /
    • v.12 no.2
    • /
    • pp.157-179
    • /
    • 2013
  • We present an approach, based on the state dependent Riccati equation, for designing non-collocated seismic response control strategies for buildings that account for physical constraints, with particular attention to force saturation. We consider both the case of active control using general actuators and that of semi-active control using magnetorheological dampers. The formulation includes multiple control devices, acceleration feedback and time delay compensation. In the active case, the proposed approach is a generalization of the classic linear quadratic regulator, while, in the semi-active case, it represents a novel generalization of the well-established modified clipped optimal approach. As discussed in the paper, the main advantage of the proposed approach with respect to existing strategies is that it naturally handles a broad class of non-linearities as well as different types of control constraints, not limited to force saturation but also including, for instance, displacement limitations. Numerical results on a typical building benchmark problem demonstrate that these additional features are achieved with essentially the same control effectiveness as existing saturation control strategies.
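
As a rough illustration of the force-saturation idea only (not the authors' SDRE formulation), the sketch below clips a state-feedback force to an actuator capacity and, for the semi-active case, issues a clipped-voltage command of the kind used in modified clipped-optimal schemes; all names, gains and limits are hypothetical.

```python
import numpy as np

F_MAX = 1000.0   # actuator force capacity (hypothetical)
V_MAX = 10.0     # maximum MR-damper command voltage (hypothetical)

def active_command(K, x, f_max=F_MAX):
    """Active case: state-feedback force clipped to the saturation limit."""
    f_desired = -K @ x                       # nominal state-feedback force
    return np.clip(f_desired, -f_max, f_max)

def semi_active_command(f_desired, f_measured, v_max=V_MAX):
    """Semi-active case: clipped-voltage law for an MR damper.

    The command voltage is raised only when the measured damper force has
    the same sign as, and a smaller magnitude than, the desired force.
    """
    if f_desired * f_measured > 0.0 and abs(f_measured) < abs(f_desired):
        return v_max
    return 0.0
```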

Improvement of generalization of linear model through data augmentation based on Central Limit Theorem (데이터 증가를 통한 선형 모델의 일반화 성능 개량 (중심극한정리를 기반으로))

  • Hwang, Doohwan
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.19-31
    • /
    • 2022
  • In machine learning, we usually divide the entire data set into training data and test data, train the model on the training data, and use the test data to determine the accuracy and generalization performance of the model. For models with low generalization performance, prediction accuracy on new data drops significantly, and the model is said to be overfit. This study proposes a method of generating training data based on the central limit theorem, combining it with the existing training data to increase normality, and using the combined data to train models and improve generalization performance. To this end, data were generated using the sample mean and standard deviation of each feature, exploiting the central limit theorem, and a new training set was constructed by combining them with the existing training data. To determine the degree of increase in normality, the Kolmogorov-Smirnov normality test was conducted, and it was confirmed that the new training data showed increased normality compared to the existing data. Generalization performance was measured through the difference in prediction accuracy between training data and test data. Applying this to K-Nearest Neighbors (KNN), Logistic Regression, and Linear Discriminant Analysis (LDA), we confirmed that generalization performance improved for KNN, a non-parametric technique, and for LDA, which assumes normality in model building.
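
A minimal sketch of this kind of augmentation, under one plausible reading of the procedure: synthetic rows are built from means of random subsamples, which by the central limit theorem are approximately normal around each feature's sample mean, and the gain in normality is checked with a Kolmogorov-Smirnov test. Function names and parameters are illustrative, not the paper's.

```python
import numpy as np
from scipy import stats

def clt_augment(X, n_new, subsample=30, rng=None):
    """Append synthetic rows whose features are means of random subsamples.

    By the central limit theorem these subsample means are approximately
    normally distributed around each feature's sample mean, which pushes
    the augmented training set toward normality.
    """
    rng = np.random.default_rng(rng)
    rows = []
    for _ in range(n_new):
        idx = rng.choice(len(X), size=subsample, replace=True)
        rows.append(X[idx].mean(axis=0))
    return np.vstack([X, np.asarray(rows)])

def ks_normality(feature):
    """Kolmogorov-Smirnov test of one feature against a fitted normal."""
    return stats.kstest(feature, 'norm',
                        args=(feature.mean(), feature.std(ddof=1)))

# Illustrative usage: augment each class separately so the synthetic rows
# inherit that class label, then train KNN / Logistic Regression / LDA on
# the combined data and compare train vs. test accuracy.
```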

Performance Evaluation of the Extraction Method of Representative Keywords by Fuzzy Inference (퍼지추론 기반 대표 키워드 추출방법의 성능 평가)

  • Rho Sun-Ok;Kim Byeong Man;Oh Sang Yeop;Lee Hyun Ah
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.10 no.1
    • /
    • pp.28-37
    • /
    • 2005
  • In our previous works, we suggested a method that extracts representative keywords from a few positive documents and assigns weights to them. To show the usefulness of the method, in this paper we evaluate the performance of a well-known classification algorithm called GIS (Generalized Instance Set) when it is combined with our method. In the GIS algorithm, generalized instances are built from learning documents by a generalization function, and the K-NN algorithm is then applied to them. Here, our method is used as the generalization function. For comparison, the Rocchio and Widrow-Hoff algorithms are also used as generalization functions. Experimental results show that our method is better than the others when only positive documents are considered, but not when negative documents are considered as well.
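
A much-simplified sketch of the classification pipeline the abstract evaluates, assuming TF-IDF document vectors, a Rocchio-style centroid as the generalization function and plain k-NN over the generalized instances; the fuzzy-inference keyword weighting itself is not reproduced, and all names and parameters are illustrative.

```python
import numpy as np

def rocchio_generalize(pos, neg, beta=16.0, gamma=4.0):
    """Rocchio-style generalization function: one prototype vector built
    from a cluster of positive document vectors and the negative vectors."""
    proto = beta * pos.mean(axis=0)
    if len(neg):
        proto -= gamma * neg.mean(axis=0)
    return proto

def build_generalized_instances(pos_clusters, neg):
    """GIS-style step: replace each cluster of learning documents with a
    single generalized instance."""
    return np.array([rocchio_generalize(c, neg) for c in pos_clusters])

def knn_positive_score(query, instances, labels, k=3):
    """Apply k-NN over the generalized instances using cosine similarity."""
    sims = instances @ query / (
        np.linalg.norm(instances, axis=1) * np.linalg.norm(query) + 1e-12)
    top = np.argsort(sims)[-k:]
    return labels[top].mean()    # fraction of positive neighbours
```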

A case-by-case version of CB statistic in biased estimation

  • Ahn, Byoung Jin
    • Journal of Korean Society for Quality Management
    • /
    • v.19 no.2
    • /
    • pp.40-51
    • /
    • 1991
  • The $C_B$ statistic, a generalization of Mallows's $C_L$ statistic, is developed to determine the shrinkage parameter. Since not all cases in a data set play an equal role in forming $C_B$, a subdivision of $C_B$ into individual components for each case is developed. This subdivision is useful both as an aid in understanding $C_B$ and as a diagnostic procedure.
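
For orientation only, one natural reading of such a case-by-case subdivision, in the style of Mallows-type statistics for a linear (possibly shrunken) fit $\hat{y} = H_B\,y$; this is an illustrative form, not the paper's exact definition.

```latex
% Case-by-case subdivision of a Mallows-type statistic for a biased
% (shrinkage) fit  \hat{y} = H_B y  -- illustrative notation only.
\[
  C_B \;=\; \sum_{i=1}^{n} c_i ,
  \qquad
  c_i \;=\; \frac{(y_i - \hat{y}_i)^2}{\hat{\sigma}^2} \;-\; 1 \;+\; 2\,h_{ii},
\]
% where h_{ii} is the i-th diagonal entry of H_B; unusually large c_i flag
% cases that dominate the choice of the shrinkage parameter.
```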

A study on the generalization for Euclidean proof of the Pythagorean theorem (피타고라스 정리의 유클리드 증명에 관한 일반화)

  • Chung, Young Woo;Kim, Boo Yoon;Kim, Dong Young;Ryu, Dong Min;Park, Ju Hyung;Jang, Min Je
    • East Asian mathematical journal
    • /
    • v.31 no.4
    • /
    • pp.459-481
    • /
    • 2015
  • In this study, we investigated whether the theorem still holds when the 'square' element in the Euclidean proof of the Pythagorean theorem is replaced with other figures. The figures used were equilateral triangles, isosceles triangles, (variant) right triangles, rectangles, parallelograms, and arbitrary similar figures. The Pythagorean theorem expresses a relationship between the three sides of a right triangle, whereas the Euclidean proof proceeds through the relationship between the areas of the squares whose edges are the sides of the right triangle. Depending on the attached figures, we found that the Pythagorean theorem appears in three ways: as a relationship between the sides, as a relationship between the areas, and in one case that does not directly reduce to either of the previous two. In addition, we recognized the efficiency of the Euclidean proof based on attached squares. This proving activity requires a mathematical process, and a generalization of this process is good material for experiencing diversity and rigor at the same time.
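
The area form of the generalization rests on one observation: similar figures erected on the three sides have areas that are the same constant multiple of the squares of those sides, so the area identity is equivalent to the side identity. A short sketch:

```latex
% Similar figures on the sides a, b and the hypotenuse c of a right triangle:
% similarity gives A_a = m a^2, A_b = m b^2, A_c = m c^2 for one constant m > 0.
\[
  A_a + A_b = A_c
  \;\Longleftrightarrow\;
  m a^2 + m b^2 = m c^2
  \;\Longleftrightarrow\;
  a^2 + b^2 = c^2 .
\]
```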

A Study on Training Ensembles of Neural Networks - A Case of Stock Price Prediction (신경망 학습앙상블에 관한 연구 - 주가예측을 중심으로 -)

  • 이영찬;곽수환
    • Journal of Intelligence and Information Systems
    • /
    • v.5 no.1
    • /
    • pp.95-101
    • /
    • 1999
  • In this paper, a comparison between different methods of combining predictions from neural networks is given. These methods are bagging, bumping, and balancing. They are based on the decomposition of the ensemble generalization error into an ambiguity term and a term incorporating the generalization performance of the individual networks. Neural networks and other machine learning models are prone to overfitting. One strategy to prevent a neural network from overfitting is to stop training at an early stage of the learning process. The complete data set is split into a training set and a validation set, and training is stopped when the error on the validation set starts increasing. The stability of the networks is highly dependent on the division into training and validation sets, and also on the random initial weights and the chosen minimization procedure. This makes early-stopped networks rather unstable: a small change in the data or different initial conditions can produce large changes in the prediction. Therefore, it is advisable to apply the same procedure several times starting from different initial weights, a technique often referred to as training ensembles of neural networks. In this paper, we present a comparison of three statistical methods to prevent overfitting of neural networks.
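
A minimal sketch of the 'training ensembles' idea described above: several networks are trained from different random initial weights, each with early stopping on a held-out validation split, and their predictions are averaged. scikit-learn's MLPRegressor is used purely for illustration; it is not the paper's setup, and bagging, bumping and balancing differ in how the members and their weights are chosen.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_ensemble(X, y, n_members=10, seed=0):
    """Train several early-stopped networks from different initial weights."""
    members = []
    for m in range(n_members):
        net = MLPRegressor(
            hidden_layer_sizes=(16,),
            early_stopping=True,        # stop when validation error rises
            validation_fraction=0.2,    # internal train/validation split
            random_state=seed + m,      # different initial weights per member
            max_iter=2000,
        )
        members.append(net.fit(X, y))
    return members

def ensemble_predict(members, X):
    """Average the member predictions (simple, equally weighted ensemble)."""
    return np.mean([net.predict(X) for net in members], axis=0)
```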

Factorization Models and Other Representation of Independence

  • Lee, Yong-Goo
    • Journal of the Korean Statistical Society
    • /
    • v.19 no.1
    • /
    • pp.45-53
    • /
    • 1990
  • Factorization models are a generalization of hierarchical loglinear models which apply equally to discrete and continuous distributions. In regular (strictly positive) cases the intersection of two factorization models is another factorization model whose representation is obtained by a simple algorithm. Failure of this result in an irregular case is related to a theorem of Basu on ancillary statistics.
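
For readers unfamiliar with the term, a factorization model in this sense constrains a joint density to factor into functions of subsets of the variables; a sketch in generic notation (not the paper's):

```latex
% A factorization model indexed by a class  \mathcal{A}  of subsets of
% {1,...,p}: the joint density factors into terms depending only on the
% corresponding coordinates.
\[
  f(x_1,\dots,x_p) \;=\; \prod_{A \in \mathcal{A}} \psi_A\!\left(x_A\right),
  \qquad \psi_A \ge 0 .
\]
% Taking logarithms for strictly positive discrete distributions recovers a
% hierarchical loglinear model; in the regular (strictly positive) case the
% intersection of two such models is again a factorization model.
```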
