• Title/Summary/Keyword: dimension reduction method

Search Results: 250

Analysis of Motion Response and Drift Force in Waves for the Floating-Type Ocean Monitoring Facilities (부유식 해상관측시설의 파랑중 운동 및 표류력 해석)

  • YOON Gil Su;KIM Yong Jig;KIM Dong Jun;KANG Shin Young
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.31 no.2
    • /
    • pp.202-209
    • /
    • 1998
  • A three-dimensional numerical method based on the Green's integral equation is developed to predict the motion response and drift force in waves for ocean monitoring facilities. The method uses source and doublet distributions with triangular and rectangular elements. To eliminate the irregular-frequency phenomenon, an improved integral equation method is applied, and the time-mean drift force is calculated by direct pressure integration over the body surface. To confirm the validity of the present numerical method, calculations for a floating sphere are performed, and the method is shown to provide sufficiently reliable results. As a calculation example for real facilities, the motion response and drift force of a vertical-cylinder-type ocean monitoring buoy with a 2.6 m diameter and 3.77 m draft are calculated and discussed. The obtained motion responses can be used to determine the shape and dimensions of the buoy so as to reduce its motion, and other quantities, such as the motion reduction due to a damper, can be predicted through these calculations. The calculated drift forces can likewise be used in the design of the mooring system to predict the maximum wave load acting on it. The present method places, in principle, no restriction on the shape of the facility, so it can serve as a robust tool for the design, installation, and operation of various kinds of floating-type ocean monitoring facilities.

  • PDF
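The direct pressure integration named in the abstract can be illustrated with a minimal sketch: given panel pressures, outward unit normals, and areas from a discretized body surface, the net force is a panel-wise sum. All numerical values below are invented toy data, not results from the paper.

```python
import numpy as np

def pressure_integration(pressures, normals, areas):
    """Net force on the body from panel pressures: F = -sum_i p_i * n_i * dA_i."""
    pressures = np.asarray(pressures)      # (N,) panel pressures [Pa]
    normals = np.asarray(normals)          # (N, 3) outward unit normals
    areas = np.asarray(areas)              # (N,) panel areas [m^2]
    return -(pressures[:, None] * normals * areas[:, None]).sum(axis=0)

# Two-panel toy: unequal pressures on opposite faces give a net force along x.
normals = [[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]
areas = [1.0, 1.0]
force = pressure_integration([100.0, 120.0], normals, areas)
```

On a real panel mesh the pressures would come from the solved source/doublet distribution; only the final integration step is shown here.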

A personalized exercise recommendation system using dimension reduction algorithms

  • Lee, Ha-Young;Jeong, Ok-Ran
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.6
    • /
    • pp.19-28
    • /
    • 2021
  • Interest in health care has increased due to Coronavirus (COVID-19), and many people have turned to home training as shared facilities such as fitness centers have become difficult to use. In this paper, we propose a personalized exercise recommendation algorithm that uses personal propensity information to provide more accurate and meaningful exercise recommendations to home training users. To this end, we classify the data according to obesity criteria with a k-nearest neighbor algorithm, using personal information that characterizes individuals, such as eating habits and physical condition. Furthermore, we differentiate the exercise dataset by the level of exercise activity. Based on the neighborhood information of each dataset, we provide personalized exercise recommendations through a dimensionality reduction algorithm (SVD), a model-based collaborative filtering method. In this way, we address the data sparsity and scalability problems of memory-based collaborative filtering techniques, and we verify the accuracy and performance of the proposed algorithms.
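The SVD-based collaborative filtering step can be sketched with a plain truncated SVD; the rating matrix below is invented toy data, whereas the real system builds it from the classified exercise datasets.

```python
import numpy as np

# Toy user-by-exercise rating matrix; 0 marks an unrated item.
R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0],
              [0.0, 1.0, 5.0, 4.0]])

k = 2                                            # reduced latent dimension
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-k reconstruction

# Recommend, for user 0, the unrated item with the highest predicted score.
unrated = np.where(R[0] == 0)[0]
best = unrated[np.argmax(R_hat[0, unrated])]
```

Working in the rank-k space is what sidesteps the sparsity and scalability issues of memory-based methods: predictions come from the low-rank model rather than from raw neighbor lookups.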

Robust Structural Optimization Using Gauss-type Quadrature Formula (가우스구적법을 이용한 구조물의 강건최적설계)

  • Lee, Sang-Hoon;Seo, Ki-Seog;Chen, Shikui;Chen, Wei
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.33 no.8
    • /
    • pp.745-752
    • /
    • 2009
  • In robust design, the mean and variance of a design performance measure are frequently used to quantify the performance and its robustness under uncertainty. In this paper, we present the Gauss-type quadrature formula as a rigorous method for mean and variance estimation involving arbitrary input distributions, and we further extend its use to robust design optimization. One-dimensional Gauss-type quadrature formulae are constructed from the input probability distributions and used to build multidimensional quadrature formulae such as the tensor product quadrature (TPQ) formula and the univariate dimension reduction (UDR) method. To improve the efficiency of robust design optimization, a semi-analytic design sensitivity analysis with respect to the statistical moments is proposed. The proposed approach is applied to simple benchmark problems and to robust topology optimization of structures considering various types of uncertainty.
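For a standard normal input, the one-dimensional Gauss-type quadrature the abstract describes reduces to Gauss-Hermite quadrature. A minimal sketch of moment estimation, with an illustrative response function that is not from the paper:

```python
import numpy as np

def gauss_moments(g, n=5):
    """Mean and variance of g(X), X ~ N(0, 1), via n-point Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n)  # weight exp(-x^2)
    x = np.sqrt(2.0) * nodes                             # map nodes to N(0, 1)
    w = weights / np.sqrt(np.pi)                         # weights now sum to 1
    mean = np.sum(w * g(x))
    var = np.sum(w * g(x) ** 2) - mean ** 2
    return mean, var

# For g(x) = x^2: E[X^2] = 1 and Var[X^2] = E[X^4] - 1 = 2, recovered exactly
# because the rule integrates polynomials of this degree without error.
mean, var = gauss_moments(lambda x: x**2)
```

In the multidimensional TPQ/UDR setting the same one-dimensional rule is applied per input variable, either as a tensor product or along univariate cuts.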

Controlling the Depth of Microchannels Formed during Rolling-based Surface Texturing

  • Bui, Quang-Thanh;Ro, Seung-Kook;Park, Jong-Kweon
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.25 no.6
    • /
    • pp.410-420
    • /
    • 2016
  • The geometric dimensions and shapes of microchannels formed during surface texturing are widely studied for applications in flow control and in drag and friction reduction. In this research, a new method for controlling the deformation of U-channels during micro-rolling-based surface texturing was developed. Since the width of the U-channels is almost constant, controlling the depth is essential. A calibration procedure for the initial rolling gap, proportional-integral (PI) controllers, and a linear interpolation are applied together to control the depth. The PI controllers drive the position of the pre-U-grooved roll as well as the rolling gap. The relationship between channel depth and rolling gap is linearized to create the feedback signal in the depth control system. The depth of the microchannels is studied on A2021 aluminum lamina surfaces. Overall, the experimental results demonstrate the feasibility of the method for controlling the depth of microchannels.
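The control idea (a PI controller acting on the rolling gap, with a linearized gap-to-depth relation providing feedback) can be sketched as a toy discrete loop. The gains and the 0.9 gap-to-depth slope are illustrative assumptions, not values from the paper.

```python
def run_pi(target_depth, steps=400, kp=0.4, ki=0.2, dt=0.1):
    """Toy PI loop: the controller moves the rolling gap; depth responds linearly."""
    gap, integral, depth = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = target_depth - depth           # depth feedback via the linearization
        integral += error * dt
        gap += kp * error + ki * integral      # PI output adjusts the rolling gap
        depth = 0.9 * gap                      # assumed linear gap -> depth relation
    return depth

final_depth = run_pi(50e-6)                    # track a 50 um target channel depth
```

The integral term removes the steady-state offset that a pure proportional gain would leave when the gap-to-depth slope is not exactly known.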

Separation-hybrid models for simulating nonstationary stochastic turbulent wind fields

  • Long Yan;Zhangjun Liu;Xinxin Ruan;Bohang Xu
    • Wind and Structures
    • /
    • v.38 no.1
    • /
    • pp.1-13
    • /
    • 2024
  • To simulate nonstationary stochastic turbulent wind fields effectively, four separation-hybrid (SEP-H) models are proposed in the present study. Based on the assumption that the lateral turbulence component at a single point is uncorrelated with the longitudinal and vertical turbulence components, the fluctuating wind is separated into 2nV-1D and nV-1D nonstationary stochastic vector processes. The first process can be expressed by double proper orthogonal decomposition (DPOD) or by proper orthogonal decomposition combined with the spectral representation method (POD-SRM), and the second by POD or SRM. On this basis, four SEP-H models of nonstationary stochastic turbulent wind fields are developed. In addition, the orthogonal random variables in the SEP-H models are expressed as orthogonal functions of elementary random variables, and the number theoretical method (NTM) is adopted to select a representative point set of those elementary random variables. The POD-FFT (fast Fourier transform) technique is introduced in the frequency domain to exploit the computational efficiency of the SEP-H models fully. Finally, taking a long-span bridge as the engineering background, the SEP-H models are compared with the dimension-reduction DPOD (DR-DPOD) model to verify their effectiveness and superiority.
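The SRM building block can be illustrated in its basic stationary form: superposing cosines with independent random phases, with amplitudes set by a one-sided spectrum. The spectrum below is an arbitrary illustrative shape; the paper's nonstationary extension, the POD step, and the NTM point selection are all omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dw = 256, 0.05
w = dw * (np.arange(N) + 0.5)            # frequency grid [rad/s]
S = 1.0 / (1.0 + w**2)                   # illustrative one-sided PSD (not the paper's)
phi = rng.uniform(0.0, 2.0 * np.pi, N)   # independent uniform random phases

# Classic SRM sample: u(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k)
t = np.linspace(0.0, 60.0, 600)
u = np.sqrt(2.0 * S * dw) @ np.cos(np.outer(w, t) + phi[:, None])
```

Each realization of the phases gives one sample of a zero-mean Gaussian process whose variance approximates the integral of S; the POD-FFT technique in the paper accelerates exactly this kind of frequency-domain superposition.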

Integrating Discrete Wavelet Transform and Neural Networks for Prostate Cancer Detection Using Proteomic Data

  • Hwang, Grace J.;Huang, Chuan-Ching;Chen, Ta Jen;Yue, Jack C.;Ivan Chang, Yuan-Chin;Adam, Bao-Ling
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2005.09a
    • /
    • pp.319-324
    • /
    • 2005
  • An integrated approach for prostate cancer detection using proteomic data is presented. Due to the high dimensionality of proteomic data, the discrete wavelet transform (DWT) is used in the first stage for data reduction and noise removal. After the DWT, the dimensionality is reduced from 43,556 to 1,599, so each proteomic sample can be represented by 1,599 wavelet coefficients. In the second stage, a voting method selects a common set of wavelet coefficients across all samples, producing a 987-dimensional subspace of wavelet coefficients. In the third stage, the Autoassociator algorithm reduces the dimensionality from 987 to 400. Finally, an artificial neural network (ANN) is applied in the 400-dimensional space for prostate cancer detection. The integrated approach is examined on 9 categories of 2-class experiments, as well as on 3- and 4-class experiments. All experiments were run as 10 repetitions of ten-fold cross-validation (i.e., 10 partitions, 100 runs in total). For the 9 categories of 2-class experiments, the average testing accuracies are between 81% and 96%, and the average testing accuracies of the 3- and 4-way classifications are 85% and 84%, respectively. The integrated approach achieves promising results for the early detection and diagnosis of prostate cancer.

  • PDF
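The first-stage DWT reduction can be sketched with a single-level Haar transform in plain NumPy: keeping the approximation coefficients and discarding the details halves the dimension. This is a stand-in for the multi-level wavelet transform the paper uses, and the 8-point signal is toy data.

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar DWT: low-pass (approximation) and high-pass (detail) halves."""
    x = np.asarray(signal, dtype=float)
    pairs = x.reshape(-1, 2)                              # signal length must be even
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)   # smooth, kept for reduction
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)   # fluctuations, discarded
    return approx, detail

x = np.arange(8.0)               # toy stand-in for one proteomic intensity profile
approx, detail = haar_step(x)    # 8 values -> 4 approximation coefficients
```

Repeating the step on the approximation coefficients gives the multi-level reduction (43,556 down to 1,599 in the paper) while the discarded detail bands carry most of the high-frequency noise.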

A Door Frame for Wind Turbine Towers Using Open-Die Forging and Ring-Rolling Method (열간자유단조와 링롤링공법을 이용한 풍력발전기용 도아프레임 개발)

  • Kwon, Yong Chul;Kang, Jong Hun;Kim, Sang Sik
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.39 no.7
    • /
    • pp.721-727
    • /
    • 2015
  • Mechanical components for wind turbines are mainly manufactured using open-die forging. This research introduces an advanced forging method for producing the door frame of a tubular wind turbine tower. The advantages of the new method are an increased raw material utilization ratio and a reduced energy cost. In the conventional method, the door frame is hot forged with a hydraulic press, and large amounts of material are machined away because of the difference in shape between the forged part and the final machined product. The proposed forging method combines hot forging and ring rolling processes to increase the material utilization ratio. The effectiveness of the new method depends strongly on the dimensions of the ring-rolled blank before the final forging. To obtain the optimal ring-rolled blank, the forged shape was predicted using finite element analysis. The dimensions produced by the new forging method were verified through first-article production.

Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.47-60
    • /
    • 2010
  • Most classification research has used learning-based models such as kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), or statistics-based methods such as the Bayesian classifier and neural network algorithms (NNA). However, these approaches face space and time limitations when classifying the vast number of web pages on today's internet. Moreover, most classification studies use unigram feature representations, which capture the actual meaning of words poorly. Korean web page classification faces an additional problem: many Korean words have multiple meanings (polysemy). For these reasons, LSA (Latent Semantic Analysis) is proposed for classification in this environment (large data sets and polysemous words). LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimension. This creates a new low-dimensional semantic space for representing vectors, which makes classification efficient and allows the latent meaning of words and documents (or web pages) to be analyzed. Although LSA is useful for classification, it has a drawback: as SVD reduces the dimensions of the matrix and creates the new semantic space, it selects dimensions that represent the vectors well rather than dimensions that discriminate between them. This is one reason why LSA does not improve classification performance as much as expected. In this paper, we propose a new supervised LSA that selects the dimensions that both discriminate and represent vectors well, minimizing this drawback and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we obtain further improvement in classification by creating and selecting features, removing stopwords, and weighting specific features statistically.

  • PDF

Improved Feature Descriptor Extraction and Matching Method for Efficient Image Stitching on Mobile Environment (모바일 환경에서 효율적인 영상 정합을 위한 향상된 특징점 기술자 추출 및 정합 기법)

  • Park, Jin-Yang;Ahn, Hyo Chang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.10
    • /
    • pp.39-46
    • /
    • 2013
  • Recently, the mobile industry has grown rapidly and device performance has improved, so mobile devices are used ever more widely. Because mobile devices are now equipped with high-performance cameras, image stitching can be carried out on the device itself rather than on a desktop. However, mobile hardware is limited, and image stitching is computationally expensive. In this paper, we propose an improved feature descriptor extraction and matching method for efficient image stitching in a mobile environment. Our method reduces computational complexity by extending the orientation window and reducing the descriptor dimension when the feature descriptor is generated. In addition, the computational complexity of image stitching is reduced by classifying the matching points. In our experiments, the proposed method improves the computation time of image stitching compared with the previous method, making it suitable for mobile environments while still producing natural-looking stitched images.
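Reducing descriptor dimension to cut matching cost can be illustrated generically with PCA via SVD. This is a stand-in sketch, not the paper's specific descriptor construction, and the descriptors here are random toy data.

```python
import numpy as np

rng = np.random.default_rng(1)
desc = rng.normal(size=(500, 64))    # 500 keypoints with 64-dim toy descriptors

# PCA: center, take top principal directions, project.
mean = desc.mean(axis=0)
centered = desc - mean
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ Vt[:16].T       # keep 16 components: 4x cheaper per match
```

Every pairwise distance computed during matching then touches 16 values instead of 64, which is the kind of saving that matters on constrained mobile hardware.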

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one way to handle big data in text mining. When reducing dimensionality, we should consider the density of the data, which strongly influences sentence classification performance. Higher-dimensional data require more computation and can lead to high computational cost and overfitting, so a dimension reduction step is necessary to improve model performance. Diverse methods have been proposed, from merely reducing noise such as misspellings or informal text to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of classifiers for sentence classification, one of the tasks of Natural Language Processing. The common goal of dimension reduction is to find a latent space that represents the raw data from the observation space. Existing methods use various algorithms for dimensionality reduction, such as feature extraction and feature selection. Beyond these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also used. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, words similar to them should also have little impact on sentence classification. This study proposes two approaches to more accurate classification: selective word elimination under specific rules, and construction of word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to find similar words. First, we eliminate words with comparatively low information gain from the raw text and build the word embedding. Second, we additionally eliminate words that are similar to those with low information gain and build the word embedding. Finally, the filtered text and word embeddings are fed to deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each with the deep learning models. Reviews with more than five helpful votes and a helpful-vote ratio over 70% were classified as helpful; since Yelp only shows the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that used all the words. One of the proposed methods outperformed the all-word embeddings: removing unimportant words improved performance, although removing too many words lowered it. Future research should consider diverse preprocessing methods and an in-depth analysis of word co-occurrence for measuring similarity among words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, making it possible to explore combinations of word embedding and elimination methods.
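The information gain scoring used in the first elimination step can be sketched as follows. The presence vectors and labels are toy data, and the cosine-similarity step that also removes neighboring words is omitted.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def information_gain(presence, labels):
    """IG of one word's presence indicator with respect to binary labels."""
    labels = np.asarray(labels)
    ig = entropy(np.bincount(labels) / len(labels))      # H(label)
    for v in (0, 1):                                     # minus H(label | presence)
        mask = presence == v
        if mask.any():
            cond = np.bincount(labels[mask], minlength=2) / mask.sum()
            ig -= mask.mean() * entropy(cond)
    return ig

labels = np.array([1, 1, 0, 0])
perfect = np.array([1, 1, 0, 0])    # word appearing only in positive documents
useless = np.array([1, 0, 1, 0])    # word independent of the label

ig_perfect = information_gain(perfect, labels)   # maximal: word decides the label
ig_useless = information_gain(useless, labels)   # zero: word carries no information
```

Words scoring near zero are the elimination candidates; the second proposed method would then also drop words whose embedding vectors are cosine-similar to these candidates.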