• Title/Summary/Keyword: Sufficient dimension reduction


ON PAIRWISE GAUSSIAN BASES AND LLL ALGORITHM FOR THREE DIMENSIONAL LATTICES

  • Kim, Kitae; Lee, Hyang-Sook; Lim, Seongan; Park, Jeongeun; Yie, Ikkwon
    • Journal of the Korean Mathematical Society / v.59 no.6 / pp.1047-1065 / 2022
  • For two dimensional lattices, a Gaussian basis achieves both successive minima. For dimensions larger than two, constructing a pairwise Gaussian basis is useful for computing short vectors of the lattice. For three dimensional lattices, Semaev showed that one can convert a pairwise Gaussian basis into a basis achieving all three successive minima by one simple reduction. A pairwise Gaussian basis can be obtained from a given basis by executing the Gauss algorithm on each pair of basis vectors repeatedly until the basis is pairwise Gaussian. In this article, we prove a necessary and sufficient condition for a pairwise Gaussian basis to achieve the first k successive minima of a three dimensional lattice for each k ∈ {1, 2, 3} by modifying Semaev's condition. Our condition directly checks whether a pairwise Gaussian basis contains the first k shortest independent vectors of a three dimensional lattice. LLL is the most basic lattice basis reduction algorithm, and we study how to use LLL to compute a pairwise Gaussian basis. For δ ≥ 0.9, we prove that LLL(δ) with one additional simple reduction turns any basis of a three dimensional lattice into a pairwise SV-reduced basis. Using this, we convert an LLL-reduced basis into a pairwise Gaussian basis in a few simple reductions. Our results suggest that the LLL algorithm is quite effective for computing a basis achieving all three successive minima of a three dimensional lattice.
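
    As a rough illustration of the pairwise reduction loop described above, the following sketch repeatedly applies Gauss (Lagrange) reduction to each pair of basis vectors until every pair is reduced; the function names and NumPy-based setup are illustrative, not from the paper.

    ```python
    import numpy as np

    def gauss_reduce_pair(u, v):
        """One Gauss (Lagrange) reduction pass on a pair of lattice vectors."""
        if np.dot(u, u) > np.dot(v, v):
            u, v = v, u                      # keep u as the shorter vector
        while True:
            # subtract the nearest-integer multiple of u from v
            m = round(np.dot(u, v) / np.dot(u, u))
            v = v - m * u
            if np.dot(v, v) >= np.dot(u, u):
                return u, v                  # the pair is now Gaussian reduced
            u, v = v, u

    def pairwise_gauss_basis(b1, b2, b3):
        """Repeat pairwise Gauss reduction until every pair is reduced."""
        basis = [np.asarray(b1), np.asarray(b2), np.asarray(b3)]
        changed = True
        while changed:
            changed = False
            for i in range(3):
                for j in range(i + 1, 3):
                    u, v = gauss_reduce_pair(basis[i], basis[j])
                    if not (np.array_equal(u, basis[i]) and np.array_equal(v, basis[j])):
                        basis[i], basis[j] = u, v
                        changed = True
        return basis
    ```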

Estimation of response reduction factor of RC frame staging in elevated water tanks using nonlinear static procedure

  • Lakhade, Suraj O.; Kumar, Ratnesh; Jaiswal, Omprakash R.
    • Structural Engineering and Mechanics / v.62 no.2 / pp.209-224 / 2017
  • Elevated water tanks are considered important structures due to their post-earthquake functional requirements. Elevated water tanks on reinforced concrete frame staging are widely used in India. Different response reduction factors, depending on the ductility of the frame members, are used in the seismic design of frame staging, yet studies on the appropriateness of the response reduction factor for reinforced concrete tank staging are sparse in the literature. In the present paper, a systematic study on the estimation of the key components of the response reduction factor is presented. Considering various combinations of tank capacity, staging height, seismic design level, and design response reduction factor, forty-eight analytical models are developed and designed using the relevant Indian codes. The minimum column cross section specified in the Indian code is found to be sufficient to accommodate the design steel. The strength factor and ductility factor are estimated from the results of nonlinear static pushover analysis. For seismic design category 'high', the strength factor contributes less than the ductility factor, whereas the opposite trend is observed for seismic design category 'low'. Further, the effects of staging height and tank capacity on the strength and ductility factors are studied for the two seismic design categories. For both categories, the response reduction factors obtained from the nonlinear static analysis are higher than the code-specified values. The minimum dimension restriction on columns is observed to be a key parameter in achieving the desired performance of elevated water tanks on frame staging.
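
    For orientation, the response reduction factor is commonly decomposed as the product of a strength (overstrength) factor and a ductility factor read off an idealized pushover curve. The sketch below assumes that decomposition and a Newmark-Hall-style ductility rule, which may differ in detail from the paper's procedure; all names and numbers are illustrative.

    ```python
    import math

    def response_reduction_factor(v_ultimate, v_design, mu, period):
        """R = strength factor x ductility factor, from idealized pushover results.

        v_ultimate : peak base shear from the pushover curve
        v_design   : design base shear
        mu         : displacement ductility (ultimate / yield displacement)
        period     : fundamental period of the staging (s)
        """
        strength_factor = v_ultimate / v_design
        # Newmark-Hall style ductility reduction (an assumed, common rule):
        if period < 0.2:
            r_mu = 1.0
        elif period < 0.5:
            r_mu = math.sqrt(2.0 * mu - 1.0)   # intermediate-period rule
        else:
            r_mu = mu                          # equal-displacement rule
        return strength_factor * r_mu

    # Hypothetical staging: overstrength 2.1, ductility 3.5, T = 0.8 s
    print(response_reduction_factor(v_ultimate=2100, v_design=1000, mu=3.5, period=0.8))
    ```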

Permutation Predictor Tests in Linear Regression

  • Ryu, Hye Min; Woo, Min Ah; Lee, Kyungjin; Yoo, Jae Keun
    • Communications for Statistical Applications and Methods / v.20 no.2 / pp.147-155 / 2013
  • To determine whether each coefficient is equal to zero, the usual $t$-tests are a popular choice among practitioners in linear regression because all statistical packages provide the statistics and their corresponding $p$-values. With small samples, especially under non-normal errors, these tests often fail to correctly detect statistical significance. We propose a permutation approach that adopts a sufficient dimension reduction methodology to overcome this deficit. Numerical studies confirm that the proposed method has potential advantages over the $t$-tests. In addition, a data analysis is presented.
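
    To make the permutation idea concrete, here is a generic permutation test for a single regression coefficient; this is a plain-vanilla version, not the authors' sufficient-dimension-reduction-based statistic, and the function and variable names are illustrative.

    ```python
    import numpy as np

    def permutation_coef_test(x, z, y, n_perm=2000, seed=0):
        """Permutation p-value for the coefficient of x in y ~ x + z.

        The rows of x are permuted while (z, y) stay fixed, which breaks any
        association between x and y under the null that its coefficient is zero.
        """
        rng = np.random.default_rng(seed)

        def coef_of_x(x_col):
            design = np.column_stack([np.ones(len(y)), x_col, z])
            beta, *_ = np.linalg.lstsq(design, y, rcond=None)
            return beta[1]

        observed = coef_of_x(x)
        null_stats = np.array([coef_of_x(rng.permutation(x)) for _ in range(n_perm)])
        return np.mean(np.abs(null_stats) >= abs(observed))
    ```

    Permuting only x is the simplest null scheme; when x is correlated with the other predictors, refinements such as permuting residuals are usually preferred.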

Research on Technical Development Using the CAD/CAM System (Focused on Worm Gear Development)

  • Jeong, Seon-Mo
    • Journal of the Korean Society for Precision Engineering / v.3 no.3 / pp.40-71 / 1986
  • By developing a computer program for the systematic design of worm gears, the design formulae and tables of AGMA, JGMA, BS, and DIN are analyzed and compared. The program can be run on micro-computers. From the input data of the reduction ratio, the center distance, the driving torque, and the material as design parameters, the program calculates the most efficient worm gear dimensions. When the resulting gear dimensions are inadequate, the design parameters and other empirical coefficients can easily be modified through an interactive dialogue between the user and the computer. A proposal for the standardization of worm gears was made in which a standard module following the DIN 323 standard series numbers was applied. For a more exact and effective calculation of the stress concentration and the deformation of gear teeth, a computer program using the boundary element method was also developed; even the strength of special gear shapes such as Niemann's "Cavex" gear can be calculated in a short CPU time. The main effort of this study was devoted to developing a computer program for the correction of the tooth profile and face width, which is the most important design factor for exact and wide tooth contact under load, especially for large and wide gears. For this purpose, the tooth stiffness, the mesh interferences, and the kinematics and dynamics of the gear mesh were investigated. The deflection and deformation of the gear shaft due to the loads acting on the gear and shaft were also considered. Examples show sufficiently good tooth contact when the tooth profile and face width are corrected according to the calculated results.
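
    As a schematic of the kind of calculation such a design program performs, the sketch below derives basic worm gear set geometry from a target reduction ratio, center distance, and module, using textbook metric-module relations rather than the paper's AGMA/JGMA/BS/DIN formulae; all names are illustrative.

    ```python
    import math

    def worm_gear_dimensions(ratio, center_distance, module, worm_starts=1):
        """Basic worm gear set geometry from standard metric-module relations.

        ratio           : desired reduction ratio (wheel teeth / worm starts)
        center_distance : worm-to-wheel axis distance (mm)
        module          : axial module (mm), ideally from the DIN 323 series
        """
        wheel_teeth = round(ratio * worm_starts)
        wheel_pitch_dia = module * wheel_teeth                   # d2 = m * z2
        worm_pitch_dia = 2 * center_distance - wheel_pitch_dia   # a = (d1 + d2) / 2
        q = worm_pitch_dia / module                              # diameter quotient
        lead_angle = math.degrees(math.atan(worm_starts / q))
        return {
            "wheel_teeth": wheel_teeth,
            "wheel_pitch_dia_mm": wheel_pitch_dia,
            "worm_pitch_dia_mm": worm_pitch_dia,
            "lead_angle_deg": lead_angle,
        }

    # Example: 40:1 ratio, 100 mm center distance, module 4 mm
    print(worm_gear_dimensions(ratio=40, center_distance=100, module=4))
    ```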


Graphical regression and model assessment in the logistic model

  • Kahng, Myung-Wook; Kim, Bu-Yong; Hong, Ju-Hee
    • Journal of the Korean Data and Information Science Society / v.21 no.1 / pp.21-32 / 2010
  • Graphical regression is a paradigm for obtaining regression information from plots without model assumptions. The general goal of this approach is to find low-dimensional sufficient summary plots without loss of important information. Model assessments using residual plots are less likely to be successful in models that are not linear. As an alternative, marginal model plots provide a general graphical method for assessing a model. We apply the methods of graphical regression and model assessment using marginal model plots to the logistic regression model.
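
    A minimal sketch of a marginal model plot for a fitted logistic model, assuming statsmodels' GLM and LOWESS smoother; the simulated data and variable names are illustrative.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    import statsmodels.api as sm

    def marginal_model_plot(result, x, y, ax=None):
        """Overlay a smooth of the observed response and a smooth of the fitted
        probabilities against one predictor; close agreement supports the model."""
        if ax is None:
            ax = plt.gca()
        lowess = sm.nonparametric.lowess
        data_smooth = lowess(y, x)                      # smooth of the raw data
        model_smooth = lowess(result.fittedvalues, x)   # smooth of the fitted values
        ax.scatter(x, y, s=10, alpha=0.3)
        ax.plot(data_smooth[:, 0], data_smooth[:, 1], label="data smooth")
        ax.plot(model_smooth[:, 0], model_smooth[:, 1], "--", label="model smooth")
        ax.set_xlabel("x")
        ax.set_ylabel("y / fitted probability")
        ax.legend()

    # Simulated logistic data as a stand-in for a real dataset
    rng = np.random.default_rng(1)
    x = rng.normal(size=500)
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x))))
    fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
    marginal_model_plot(fit, x, y)
    plt.show()
    ```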

Feature selection for text data via sparse principal component analysis

  • Won Son
    • The Korean Journal of Applied Statistics / v.36 no.6 / pp.501-514 / 2023
  • When analyzing high dimensional data such as text data, if we input all the variables as explanatory variables, statistical learning procedures may suffer from over-fitting. Furthermore, computational efficiency can deteriorate with a large number of variables. Dimensionality reduction techniques such as feature selection or feature extraction are useful for dealing with these problems. Sparse principal component analysis (SPCA) is a regularized least squares method that employs an elastic net-type objective function. SPCA can be used to remove insignificant principal components and identify important variables from noisy observations. In this study, we propose a dimension reduction procedure for text data based on SPCA. Applying the proposed procedure to real data, we find that the reduced feature set maintains sufficient information about the text while its size shrinks through the removal of redundant variables. As a result, the proposed procedure can improve classification accuracy and computational efficiency, especially for classifiers such as the k-nearest neighbors algorithm.
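
    A minimal sketch of the SPCA-based word selection step, using scikit-learn's SparsePCA on a small document-term matrix; the corpus, the sparsity parameter alpha, and the keep-nonzero-loading rule are illustrative assumptions, not the paper's tuning.

    ```python
    import numpy as np
    from sklearn.decomposition import SparsePCA
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["the cat sat on the mat", "dogs chase cats",
            "stock prices rose sharply", "markets fell on rate fears"]

    # Document-term matrix (SparsePCA expects a dense array)
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(docs).toarray()

    # Sparse principal components: the L1 penalty zeroes out unimportant words
    spca = SparsePCA(n_components=2, alpha=0.1, random_state=0)
    spca.fit(X)

    # Keep any word with a nonzero loading on some component
    selected = np.any(spca.components_ != 0, axis=0)
    print(np.array(vectorizer.get_feature_names_out())[selected])
    ```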

Development of a Fast Posture Classification System for a Table Tennis Robot

  • Jin, Seongho; Kwon, Yongwoo; Kim, Yoonjeong; Park, Miyoung; An, Jaehoon; Kang, Hosun; Choi, Jiwook; Lee, Inho
    • The Journal of Korea Robotics Society / v.17 no.4 / pp.463-476 / 2022
  • In this paper, we propose a table tennis posture classification system using a cooperative robot, with the goal of developing a table tennis robot that can be practiced against like a real opponent. The most ideal table tennis robot would have both a high joint driving speed and a high degree of freedom; we therefore use a cooperative robot with sufficient degrees of freedom. However, cooperative robots have the disadvantage of slow joint driving speed, a shortcoming we expect to overcome through quick recognition: by classifying the opponent's posture quickly, the robot gains time to move despite its slower joints. To this end, classifiers for the designated dynamic postures were trained on image data, and three classification models were built and compared. The comparative experiments show that the model using an MLP (multi-layer perceptron) achieves the highest classification accuracy and the fastest classification speed, demonstrating the validity of the proposed approach.
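
    A minimal sketch of the kind of MLP classifier the comparison favors, assuming fixed-length pose features (e.g., keypoint coordinates) have already been extracted from the images; the dimensions and class count are illustrative, not from the paper.

    ```python
    import torch
    import torch.nn as nn

    class PostureMLP(nn.Module):
        """Small multi-layer perceptron for classifying fixed-length posture features."""
        def __init__(self, in_dim=34, hidden=128, n_classes=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_classes),   # logits for each posture class
            )

        def forward(self, x):
            return self.net(x)

    # Example: 17 keypoints x (x, y) = 34 input features, 3 posture classes
    model = PostureMLP()
    features = torch.randn(8, 34)            # a batch of 8 pose-feature vectors
    logits = model(features)
    pred = logits.argmax(dim=1)              # predicted posture per sample
    print(pred.shape)                        # torch.Size([8])
    ```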

Transfer Learning Using Multiple ConvNet Layer Activation Features with Principal Component Analysis for Image Classification

  • Byambajav, Batkhuu; Alikhanov, Jumabek; Fang, Yang; Ko, Seunghyun; Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • A Convolutional Neural Network (ConvNet) is a powerful class of deep neural network that can analyze and learn hierarchies of visual features. An early network of this kind, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and laborious to gather a large-scale dataset to train a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning scenarios: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on the new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding the optimal combination of multiple ConvNet layers. Based on this observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to gain more information about each image; the concatenated representation has 9192 (4096 + 4096 + 1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
    To evaluate the proposed method, experiments are conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple-layer representation. Moreover, our approach achieves 75.6% accuracy versus 73.9% for the FC7 layer on Caltech-256, 73.1% versus 69.2% for the FC8 layer on VOC07, and 52.2% versus 48.7% for the FC7 layer on SUN397. We also show that our approach achieves superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
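
    A minimal sketch of the fixed-feature-extractor pipeline described above: activations from the three fully connected layers of a pre-trained AlexNet are concatenated into a 9192-dimensional (4096 + 4096 + 1000) representation and reduced with PCA. The layer indices follow torchvision's AlexNet definition, and the batch and PCA sizes are illustrative.

    ```python
    import torch
    from torchvision import models
    from sklearn.decomposition import PCA

    alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

    # Capture FC6, FC7, and FC8 activations with forward hooks.
    activations = {}
    def hook(name):
        def fn(module, inputs, output):
            activations[name] = output.detach()
        return fn

    # In torchvision's AlexNet, classifier[1] and classifier[4] are the two
    # 4096-unit linear layers and classifier[6] is the 1000-unit output layer.
    alexnet.classifier[1].register_forward_hook(hook("fc6"))
    alexnet.classifier[4].register_forward_hook(hook("fc7"))
    alexnet.classifier[6].register_forward_hook(hook("fc8"))

    with torch.no_grad():
        images = torch.randn(16, 3, 224, 224)   # stand-in for a preprocessed batch
        alexnet(images)

    # Concatenate the three layer representations: 4096 + 4096 + 1000 = 9192 dims
    feats = torch.cat([activations["fc6"], activations["fc7"], activations["fc8"]], dim=1)

    # PCA keeps the salient directions before training a classifier on top
    reduced = PCA(n_components=8).fit_transform(feats.numpy())
    print(feats.shape, reduced.shape)            # (16, 9192) -> (16, 8)
    ```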