• Title/Summary/Keyword: Weight vector extraction

Fault Diagnosis of Wind Power Converters Based on Compressed Sensing Theory and Weight Constrained AdaBoost-SVM

  • Zheng, Xiao-Xia;Peng, Peng
    • Journal of Power Electronics / v.19 no.2 / pp.443-453 / 2019
  • As the core component of transmission systems, converters are prone to failure. To improve the accuracy of fault diagnosis for wind power converters, a fault feature extraction method combining a wavelet transform with compressed sensing theory is proposed, and an improved AdaBoost-SVM is used for diagnosis. The three-phase output current signal is selected as the research object and is processed by the wavelet transform to reduce signal noise. The wavelet approximation coefficients are reduced in dimensionality, based on compressive sensing theory, to obtain measurement signals. A sparse vector is recovered by the orthogonal matching pursuit algorithm, from which the fault feature vector is extracted. The fault feature vectors are input to the improved AdaBoost-SVM classifier to realize fault diagnosis. Simulation results show that this method effectively diagnoses faults of the power transistors in converters and improves diagnostic precision.
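A minimal sketch of the compressed-sensing step this abstract describes, using a Gaussian measurement matrix and scikit-learn's orthogonal matching pursuit; the signal is synthetic, and the sizes and sparsity level are illustrative rather than the paper's settings:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                   # coefficient length, measurements (m << n), sparsity
x = np.zeros(n)                        # stands in for denoised wavelet approximation coefficients
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian measurement matrix
y = Phi @ x                                   # dimensionality-reduced measurement signal

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi, y)
x_hat = omp.coef_                             # recovered sparse vector -> fault feature vector
print("recovery error:", np.linalg.norm(x - x_hat))
```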

Fault Detection of Unbalanced Cycle Signal Data Using SOM-based Feature Signal Extraction Method (SOM기반 특징 신호 추출 기법을 이용한 불균형 주기 신호의 이상 탐지)

  • Kim, Song-Ee;Kang, Ji-Hoon;Park, Jong-Hyuck;Kim, Sung-Shick;Baek, Jun-Geol
    • Journal of the Korea Society for Simulation / v.21 no.2 / pp.79-90 / 2012
  • In this paper, a feature signal extraction method is proposed to improve the low fault-detection performance caused by unbalanced data, i.e., situations in which a severe disparity exists between the numbers of class instances. Most cyclic signals gathered during a process are recognized as normal while only a few are regarded as faults, so the majority of cyclic signal data sets are unbalanced. An SOM (Self-Organizing Map)-based feature signal extraction method is proposed to correct the adverse effects of this imbalance. The weight neurons mapped to every node of the SOM grid are extracted as the feature signals of both classes and used as a reference data set for fault detection. kNN (k-Nearest Neighbor) and SVM (Support Vector Machine) are used to build fault detection models, which are compared with Hotelling's $T^2$ control chart, the most widely used method for fault detection. Experiments are conducted on simulated process signals that resemble the cyclic signals frequently observed in semiconductor manufacturing.
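A hedged sketch of the SOM step, assuming the third-party `minisom` package and synthetic cyclic signals; the grid size and training length are illustrative. Training one small SOM per class yields equally many weight-neuron prototypes for each class, which is how the imbalance is sidestepped:

```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 100)
normal = np.sin(t) + 0.1 * rng.normal(size=(200, 100))        # majority (normal) class
fault = 0.5 * np.sin(t) + 0.1 * rng.normal(size=(5, 100))     # rare fault class

def reference_signals(data, grid=3):
    """Train a SOM and return its weight neurons as feature signals."""
    som = MiniSom(grid, grid, data.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(data, 500)
    return som.get_weights().reshape(-1, data.shape[1])

refs_normal = reference_signals(normal)   # 9 prototypes for the majority class
refs_fault = reference_signals(fault)     # 9 prototypes for the minority class
print(refs_normal.shape, refs_fault.shape)
```

The balanced reference sets can then feed a kNN or SVM detector as the abstract describes.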

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents have not benefited from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, being extremely tedious, time-consuming, and dependent on a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process. There are two main approaches to this aim: keyword assignment and keyword extraction. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, a vocabulary is given, and the aim is to match its terms to the texts; that is, keyword assignment selects the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords according to their relevance in the text, without a prior vocabulary. Here automatic keyword generation is treated as a classification task, and keywords are commonly extracted with supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article; conversely, 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment likewise shows that 37% of author-assigned keywords are not included in the full text. This is why we adopt the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that has no keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with high similarity scores. Two keyword generation systems implementing IVSM were built: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is deployed in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between IVSM-generated keywords and author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect IVSM to be applicable to many electronic documents in Web-based communities and digital libraries.
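An illustrative sketch of IVSM's matching steps (4)-(5): each keyword set is a weighted term vector, the target document a term-frequency vector, and keywords come from the most similar sets. The vocabulary and weights below are invented for the example:

```python
import numpy as np

vocab = ["wavelet", "fault", "keyword", "retrieval", "vector"]    # hypothetical vocabulary
keyword_sets = {                                                  # keyword-weight vectors
    "fault diagnosis":       np.array([0.8, 1.0, 0.0, 0.0, 0.3]),
    "information retrieval": np.array([0.0, 0.0, 0.9, 1.0, 0.6]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

doc_tf = np.array([0, 0, 3, 2, 1], dtype=float)   # term frequencies of the target document
scores = {name: cosine(vec, doc_tf) for name, vec in keyword_sets.items()}
print(max(scores, key=scores.get))                # -> "information retrieval"
```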

Optimized patch feature extraction using CNN for emotion recognition (감정 인식을 위해 CNN을 사용한 최적화된 패치 특징 추출)

  • Irfan Haider;Aera Kim;Guee-Sang Lee;Soo-Hyung Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.510-512 / 2023
  • To enhance a model's capability for detecting facial expressions, this research proposes a pipeline that makes use of a GradCAM component. The pipeline consists of a patching module and a pseudo-labeling module. The patching module divides the original face image into four equal parts, each of which is fed into a 2D convolutional layer to produce a feature vector. In the pseudo-labeling module, each image segment is assigned a weight token using GradCAM, and this token is merged with the feature vector using principal component analysis. A convolutional neural network based on transfer learning is then used to extract the deep features. Applied to the public MMI dataset, the technique achieved a validation accuracy of 96.06%, demonstrating the effectiveness of the method.
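A rough sketch of the patching module alone, assuming PyTorch, a square input, and a shared convolutional layer; the GradCAM weight tokens and PCA fusion the abstract mentions are omitted:

```python
import torch
import torch.nn as nn

def patch(img):                        # img: (B, C, H, W) -> four equal quadrants
    h, w = img.shape[2] // 2, img.shape[3] // 2
    return [img[:, :, :h, :w], img[:, :, :h, w:],
            img[:, :, h:, :w], img[:, :, h:, w:]]

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # 2D convolutional layer
x = torch.randn(1, 3, 224, 224)                     # dummy face image
features = [conv(p).flatten(1) for p in patch(x)]   # one feature vector per patch
print([f.shape for f in features])
```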

Orthonormal Polynomial based Optimal EEG Feature Extraction for Motor Imagery Brain-Computer Interface

  • Chum, Pharino;Park, Seung-Min;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.6 / pp.793-798 / 2012
  • In this paper, we explore a new method for extracting features from electroencephalography (EEG) signals based on linear regression with orthonormal polynomial bases. First, EEG signals from electrodes around the motor cortex are selected and filtered both spatially and temporally, with a band-pass filter covering the alpha and beta rhythmic bands, which are considered related to the synchronization and desynchronization of firing neuron populations during motor imagery tasks. Signals from 1 s epochs are fitted by linear regression on Legendre polynomial bases, and the regression weights are extracted as the final features. We compared our features with the state-of-the-art power band features in binary classification using a support vector machine (SVM) with 5-fold cross-validation. The results show that the proposed method improves classification accuracy by 5.44% on average over power band features across all subjects, reaching 84.5% accuracy with forward feature selection.
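A minimal sketch of the feature idea: fit a band-passed 1 s epoch with Legendre polynomial bases and keep the regression weights as the feature vector. The sampling rate, polynomial order, and the synthetic epoch below are assumptions:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(2)
fs, order = 128, 5
t = np.linspace(-1, 1, fs)                    # one 1 s epoch mapped onto [-1, 1]
epoch = np.sin(2 * np.pi * 10 * (t + 1) / 2) + 0.2 * rng.normal(size=fs)  # mock alpha-band signal

B = legvander(t, order)                       # (fs, order+1) Legendre design matrix
w, *_ = np.linalg.lstsq(B, epoch, rcond=None) # regression weights = feature vector
print(w)                                      # order+1 features per channel and epoch
```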

Automatic Extraction of UV patterns for Paper Money Inspection (지폐검사를 위한 UV 패턴의 자동추출)

  • Lee, Geon-Ho;Park, Tae-Hyoung
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.3 / pp.365-371 / 2011
  • Most recently issued paper money includes security patterns that can be identified only under ultraviolet (UV) illumination. We propose an automatic UV pattern extraction method for paper money inspection systems. The image acquired by a camera under UV illumination is transformed into input data through preprocessing. The Gaussian mixture model (GMM) and the split-and-merge expectation maximization (SMEM) algorithm are then applied to segment the image represented by the input data. To extract the UV pattern from the segmented image, we develop a criterion using the area of the covariance vector and the mixture weight value. Experimental results on various banknotes are presented to verify the usefulness of the proposed method.
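A sketch of the segmentation step using scikit-learn's plain EM-fitted GMM as a stand-in for the paper's SMEM variant; picking the smallest-weight component is a simplified proxy for the paper's covariance-and-weight criterion, and the pixel intensities are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
background = rng.normal(40, 5, size=(5000, 1))    # dark background intensities
uv_pattern = rng.normal(200, 10, size=(300, 1))   # bright, rare UV-pattern pixels
pixels = np.vstack([background, uv_pattern])

gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
pattern_k = int(np.argmin(gmm.weights_))          # UV pattern: the small-weight component
mask = gmm.predict(pixels) == pattern_k           # segmented UV-pattern pixels
print(gmm.weights_, mask.sum())
```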

Analysis of Weights and Feature Patterns in Popular 2D Deep Neural Networks Models for MRI Image Classification

  • Khagi, Bijen;Kwon, Goo-Rak
    • Journal of Multimedia Information System / v.9 no.3 / pp.177-182 / 2022
  • A deep neural network (DNN) contains variables whose values keep changing during training until the point of convergence. These variables act as the coefficients of a polynomial expression relating the input to the feature extraction process. In general, DNNs operate over multiple 'dimensions' depending on the number of channels and batches used in training. However, after feature extraction and before the SoftMax or other classifier, the features are converted from N dimensions to a single vector, where 'N' is the number of activation channels. This usually happens in a fully connected layer (FCL), or dense layer. This reduced 2D feature is the subject of our analysis; we use the FCL, so the trained weights of this layer are used for the weight-class correlation analysis. The DNN models selected for our study are ResNet-101, VGG-19, and GoogleNet. Each model is trained both by fine-tuning (with all pretrained weights transferred initially) and from scratch (with no weights transferred), and the comparison is made by plotting the feature distributions and the final FCL weights.
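A short sketch of pulling the FCL weights for this kind of distribution analysis, assuming a recent torchvision (the `weights=` API); the same idea applies to VGG-19 (`model.classifier[-1]`) and GoogleNet:

```python
import torch
from torchvision import models

model = models.resnet101(weights="IMAGENET1K_V1")   # pretrained ResNet-101
W = model.fc.weight.detach()                        # (num_classes, feature_dim) FCL weights
print(W.shape, float(W.mean()), float(W.std()))     # summary of the weight distribution
# Histograms of W's rows, for fine-tuned vs. scratch-trained runs, give the
# weight-distribution plots the paper compares.
```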

3D Mesh Watermarking Using CEGI (CEGI를 이용한 3D 메쉬 워터마킹)

  • 이석환;김태수;김승진;권기룡;이건일
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.4C / pp.472-484 / 2004
  • We propose a 3D mesh watermarking algorithm using the CEGI distribution. In the proposed algorithm, a 3D mesh of VRML data is divided into six patches using a distance measure, and the same watermark bits are embedded into the normal vector directions of the meshes mapped into the cells of each patch that have large complex-weight magnitudes in the CEGI. The watermark can be extracted based on the known center point of each patch and the order information of the cells. For a model attacked by an affine transformation, a realignment process is carried out before watermark extraction. Experimental results show that the proposed algorithm is robust, as the watermark bits can be extracted from geometrically and topologically deformed models.
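A very small sketch of the EGI-style bookkeeping behind the method: face normals are area-weighted and binned by orientation, and embedding would target the high-magnitude cells. The mesh below is synthetic, the cells are simple octants rather than the paper's cell layout, and the complex phase term of the CEGI (distance from the patch center) is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)
verts = rng.normal(size=(30, 3))
faces = np.array([rng.choice(30, 3, replace=False) for _ in range(50)])  # dummy triangles

v0, v1, v2 = (verts[faces[:, i]] for i in range(3))
normals = np.cross(v1 - v0, v2 - v0)
areas = np.linalg.norm(normals, axis=1) / 2           # triangle areas = EGI magnitudes
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

cells = ((normals > 0) * np.array([1, 2, 4])).sum(axis=1)   # octant index per face normal
cell_weight = np.bincount(cells, weights=areas, minlength=8)
print(np.argsort(cell_weight)[::-1])                  # candidate cells for embedding
```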

Modified Gram-Schmidt Algorithm Using Equivalent Wiener-Hopf Equation (등가의 Wiener-Hopf 방정식을 이용한 수정된 Gram-Schmidt 알고리즘)

  • Ahn, Bong-Man;Hwang, Jee-Won;Cho, Ju-Phil
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.7C / pp.562-568 / 2008
  • This paper proposes a scheme that obtains the coefficients of a TDL filter, together with two normalization algorithms, from the solution of the equivalent Wiener-Hopf equation within the Gram-Schmidt algorithm. Whereas the conventional NLMS algorithm normalizes by the sum of the input powers, the proposed algorithms normalize by sums of eigenvalues. Through computer simulation, we perform system identification in an unstable environment where two poles are located close to, but outside, the unit circle. The proposed algorithms recursively obtain the TDL filter coefficients within the Gram-Schmidt algorithm and show better convergence than the conventional NLMS algorithm.
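For context, a baseline sketch of the conventional NLMS recursion the paper compares against, on a toy system-identification task; the Gram-Schmidt/eigenvalue normalization itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)
h_true = np.array([0.5, -0.3, 0.2, 0.1])              # unknown system (TDL coefficients)
x = rng.normal(size=2000)                             # input signal
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.normal(size=len(x))

w = np.zeros(4)
mu, eps = 0.5, 1e-8
for n in range(3, len(x)):
    u = x[n - 3:n + 1][::-1]                          # tap-delay-line input vector
    e = d[n] - w @ u                                  # a priori error
    w += mu * e * u / (u @ u + eps)                   # normalize by input power
print(w)                                              # converges toward h_true
```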

Real-time comprehensive image processing system for detecting concrete bridges crack

  • Lin, Weiguo;Sun, Yichao;Yang, Qiaoning;Lin, Yaru
    • Computers and Concrete / v.23 no.6 / pp.445-457 / 2019
  • Cracks are an important distress of concrete bridges and may reduce their life and safety. However, traditional manual crack detection depends heavily on the experience of inspectors; furthermore, it is time-consuming, expensive, and often unsafe when inaccessible parts of a bridge, such as viaduct piers, must be assessed. To address this problem, a real-time automatic crack detection system carried by an unmanned aerial vehicle (UAV) becomes an attractive option. This paper designs a new automatic bridge crack detection system based on real-time comprehensive image processing. It is small, light, and low-power, and can be carried on a small UAV for real-time data acquisition and processing. The real-time comprehensive image processing algorithm used in this detection system combines the advantages of connected-domain area, shape extremum, morphology, and support vector data description (SVDD). The performance and validity of the proposed algorithm and system are verified. Compared with other detection methods, the proposed system detects cracks effectively, with high accuracy and at high speed, and is suitable for practical engineering applications.
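A sketch of the SVDD screening idea using scikit-learn's OneClassSVM, which with an RBF kernel is equivalent to SVDD; the two shape features below are invented placeholders for the paper's connected-domain and shape-extremum descriptors:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(6)
crack_feats = np.column_stack([
    rng.normal(20, 3, 200),        # e.g. elongation of a crack-like region
    rng.normal(0.1, 0.02, 200),    # e.g. area-to-bounding-box ratio
])
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(crack_feats)

candidates = np.array([[19.0, 0.11], [5.0, 0.5]])   # crack-like vs. blob-like region
print(model.predict(candidates))                    # +1 = crack, -1 = non-crack
```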