• Title/Abstract/Keyword: Weight vector extraction

18 search results (processing time 0.023 s)

Fault Diagnosis of Wind Power Converters Based on Compressed Sensing Theory and Weight Constrained AdaBoost-SVM

  • Zheng, Xiao-Xia;Peng, Peng
    • Journal of Power Electronics
    • /
    • Vol. 19, No. 2
    • /
    • pp.443-453
    • /
    • 2019
  • As the core component of transmission systems, converters are very prone to failure. To improve the accuracy of fault diagnosis for wind power converters, a fault feature extraction method combining a wavelet transform with compressed sensing theory is proposed. In addition, an improved AdaBoost-SVM is used to diagnose wind power converters. The three-phase output current signal is selected as the research object and is processed by the wavelet transform to reduce signal noise. The wavelet approximation coefficients are reduced in dimensionality, based on compressive sensing theory, to obtain measurement signals. A sparse vector is obtained by the orthogonal matching pursuit algorithm, and the fault feature vector is then extracted. The fault feature vectors are input to the improved AdaBoost-SVM classifier to realize fault diagnosis. Simulation results show that this method can effectively diagnose faults of the power transistors in converters and improve the precision of fault diagnosis.
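The orthogonal matching pursuit step in the pipeline above can be sketched as follows. This is a generic OMP with a random Gaussian measurement matrix, not the paper's actual wavelet-domain setup; the matrix size and sparsity level are assumptions for illustration.

```python
import numpy as np

def omp(Phi, y, k):
    """Recover a k-sparse vector x from measurements y = Phi @ x
    by greedily selecting the columns of Phi most correlated with
    the residual, then re-fitting by least squares."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(idx)
        # least-squares fit on the selected support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100)) / np.sqrt(40)   # measurement matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 0.7]               # 3-sparse signal
y = Phi @ x_true                                     # compressed measurements
x_rec = omp(Phi, y, k=3)
```

With noiseless measurements and a well-conditioned random matrix, the greedy selection finds the true support and the least-squares refit recovers the coefficients exactly.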

SOM기반 특징 신호 추출 기법을 이용한 불균형 주기 신호의 이상 탐지 (Fault Detection of Unbalanced Cycle Signal Data Using SOM-based Feature Signal Extraction Method)

  • 김송이;강지훈;박종혁;김성식;백준걸
    • 한국시뮬레이션학회논문지
    • /
    • Vol. 21, No. 2
    • /
    • pp.79-90
    • /
    • 2012
  • This study proposes a feature-signal extraction method to improve the performance of fault detection algorithms when the process signals form an imbalanced dataset. Imbalanced data refers to a classification setting in which the proportion of data belonging to one class differs greatly from that of the other classes, which severely degrades fault detection performance. Since the number of fault signals obtainable while a process is running is far smaller than the number of normal signals, resolving this problem is essential before applying fault detection techniques. To address the imbalance, the SOM (Self-Organizing Map) algorithm is applied, and the weight vector corresponding to each node is treated as a feature signal, balancing the ratio of normal to fault data. For fault detection on the feature-signal dataset, the classifiers kNN (k-Nearest Neighbor) and SVM (Support Vector Machine) are applied, and their performance is compared with Hotelling's $T^2$ control chart, which is commonly used for process-signal fault detection. The superiority of the proposed algorithm is verified by simulating process signals known to occur in semiconductor manufacturing.
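The SOM-based balancing idea can be sketched roughly as below. The node count, learning-rate and neighborhood schedules, and the choice of which class to summarize with prototypes are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def train_som(data, n_nodes=5, n_iter=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D self-organizing map; each node's weight vector can then
    be used as a representative 'feature signal' for its neighborhood."""
    rng = np.random.default_rng(seed)
    # initialize node weights from random training samples
    weights = data[rng.choice(len(data), n_nodes, replace=False)].astype(float)
    grid = np.arange(n_nodes)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # best matching unit: node closest to the sample
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        # neighborhood function pulls nearby nodes toward the sample
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights

rng = np.random.default_rng(1)
# imbalanced toy data: many 'normal' cycles near 0, few 'fault' cycles near 5
normal = rng.normal(0.0, 0.3, size=(200, 4))
fault = rng.normal(5.0, 0.3, size=(10, 4))
# SOM prototypes summarizing one class; a classifier (kNN/SVM) would then
# be trained on the rebalanced feature signals
feature_signals = train_som(fault, n_nodes=5)
```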

키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법 (A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model)

  • 조원진;노상규;윤지영;박진수
    • Asia pacific journal of information systems
    • /
    • Vol. 21, No. 1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents could also benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle; manually assigning keywords to all documents is a daunting, even impractical, task in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document.
Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. On the other hand, in the latter approach, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques. Thus, keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems such as Extractor and Kea were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document; as a result, keyword extraction is limited to terms that appear in the document. Therefore, keyword extraction cannot generate implicit keywords that are not included in a document. According to the experimental results of Turney, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of the keywords assigned by authors do not appear in the article and thus cannot be generated by keyword extraction algorithms. Our preliminary experimental results also show that 37% of the keywords assigned by authors are not included in the full text. This is why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets.
The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating the keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. First, the IVSM system is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers, and it has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents increases, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
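Steps (1)-(5) above can be sketched with a toy vocabulary; the keyword sets, their weights, and the sample document below are invented purely for illustration.

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    common = set(u) & set(v)
    dot = sum(u[t] * v[t] for t in common)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# step (1): candidate keyword sets, each represented as a term-weight vector
keyword_sets = {
    "machine learning": {"learning": 2.0, "model": 1.0, "training": 1.0},
    "logistics": {"shipping": 2.0, "port": 1.0, "distribution": 1.0},
}

# steps (2)-(3): parse the target document and build a term-frequency vector
document = "the model improves learning by training on more training data"
doc_vec = dict(Counter(document.split()))

# steps (4)-(5): rank keyword sets by cosine similarity to the document
ranked = sorted(keyword_sets,
                key=lambda k: cosine(keyword_sets[k], doc_vec),
                reverse=True)
```

The top-ranked keyword sets would then be assigned to the document, which is how implicit keywords absent from the text can still be generated.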

감정 인식을 위해 CNN을 사용한 최적화된 패치 특징 추출 (Optimized patch feature extraction using CNN for emotion recognition)

  • 하이더 이르판;김애라;이귀상;김수형
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 2023년도 춘계학술발표대회
    • /
    • pp.510-512
    • /
    • 2023
  • To enhance a model's capability for detecting facial expressions, this research proposes a pipeline that makes use of the GradCAM component. The pipeline consists of a patching module and a pseudo-labeling module. The patching module takes the original face image and divides it into four equal parts, each of which is then fed into a 2D convolutional layer to produce a feature vector. In the pseudo-labeling module, each image segment is assigned a weight token using GradCAM, and this token is then merged with the feature vector using principal component analysis. A convolutional neural network based on a transfer learning technique is then utilized to extract the deep features. The technique was applied to the public MMI dataset and achieved a validation accuracy of 96.06%, showing the effectiveness of our method.
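The patching module can be sketched as below. The image size and the 3x3 averaging kernel are placeholders, and the GradCAM weighting and PCA merge steps are omitted.

```python
import numpy as np

def split_into_quadrants(img):
    """Divide a face image into four equal patches (the patching module)."""
    h, w = img.shape
    h2, w2 = h // 2, w // 2
    return [img[:h2, :w2], img[:h2, w2:2 * w2],
            img[h2:2 * h2, :w2], img[h2:2 * h2, w2:2 * w2]]

def conv2d_feature(patch, kernel):
    """Tiny valid-mode 2D convolution, a stand-in for the conv layer,
    flattened into a feature vector."""
    kh, kw = kernel.shape
    out = np.array([[np.sum(patch[i:i + kh, j:j + kw] * kernel)
                     for j in range(patch.shape[1] - kw + 1)]
                    for i in range(patch.shape[0] - kh + 1)])
    return out.ravel()

face = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in for a face image
patches = split_into_quadrants(face)
feature_vecs = [conv2d_feature(p, np.ones((3, 3)) / 9) for p in patches]
```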

Orthonormal Polynomial based Optimal EEG Feature Extraction for Motor Imagery Brain-Computer Interface

  • ;박승민;고광은;심귀보
    • 한국지능시스템학회논문지
    • /
    • Vol. 22, No. 6
    • /
    • pp.793-798
    • /
    • 2012
  • In this paper, we explore a new method for extracting features from the electroencephalography (EEG) signal based on a linear regression technique with orthonormal polynomial bases. First, EEG signals from electrodes around the motor cortex are selected and filtered both spatially and temporally, using a band-pass filter for the alpha and beta rhythmic bands, which are considered related to the synchronization and desynchronization of firing neuron populations during motor imagery tasks. Signals of 1 s epoch length are fitted by linear regression with Legendre polynomial bases, and the linear regression weights are extracted as the final features. We compare our feature with the state-of-the-art power-band feature in binary classification using a support vector machine (SVM) with 5-fold cross-validation. The results show that the proposed method improves classification accuracy by 5.44% on average over power-band features across all subjects, and achieves 84.5% classification accuracy with forward feature selection.
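The Legendre-basis feature extraction can be sketched with NumPy's polynomial module. The epoch length and polynomial degree below are illustrative, and the spatial and band-pass filtering stages are omitted.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_features(signal, degree=5):
    """Fit an epoch with Legendre polynomial bases on [-1, 1] and
    return the regression weights as the feature vector."""
    t = np.linspace(-1.0, 1.0, len(signal))
    return legendre.legfit(t, signal, degree)

# toy 'epoch': an exact mixture of the first three Legendre polynomials
t = np.linspace(-1.0, 1.0, 256)
epoch = 0.5 + 1.2 * t + 0.3 * (1.5 * t**2 - 0.5)   # 0.5*P0 + 1.2*P1 + 0.3*P2
features = legendre_features(epoch, degree=5)
```

Because the toy epoch is an exact Legendre mixture, the fitted weights recover the mixing coefficients, which is exactly the information the classifier would consume.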

지폐검사를 위한 UV 패턴의 자동추출 (Automatic Extraction of UV patterns for Paper Money Inspection)

  • 이건호;박태형
    • 한국지능시스템학회논문지
    • /
    • Vol. 21, No. 3
    • /
    • pp.365-371
    • /
    • 2011
  • Most recently issued banknotes include UV patterns that respond to UV (ultraviolet) illumination. This paper proposes a method for automatically extracting the UV patterns inside a banknote for paper money inspection. An image captured under UV illumination is converted into input data through preprocessing, and the image is then segmented into several regions by applying a Gaussian mixture model with the split-and-merge EM (SMEM) algorithm. To extract the desired pattern from the segmented image, we newly propose a method that uses the area of the covariance vector and the weights. Experiments on various banknotes demonstrate the usefulness of the proposed method.
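A plain EM fit of a Gaussian mixture to pixel intensities serves as a rough stand-in for the segmentation step; the split-and-merge refinement of SMEM and the 2-D image context are omitted, and the component count and toy intensities are assumptions.

```python
import numpy as np

def gmm_em_1d(x, k=2, n_iter=50):
    """Plain EM for a 1-D Gaussian mixture over pixel intensities."""
    # spread the initial means across the intensity range
    mu = np.percentile(x, np.linspace(0, 100, k)).astype(float)
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    labels = np.argmax(r, axis=1)  # region assignment per pixel
    return mu, labels

rng = np.random.default_rng(2)
# toy intensities: dark background vs. bright UV pattern
pixels = np.concatenate([rng.normal(0.2, 0.05, 500),
                         rng.normal(0.8, 0.05, 100)])
mu, labels = gmm_em_1d(pixels, k=2)
```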

Analysis of Weights and Feature Patterns in Popular 2D Deep Neural Networks Models for MRI Image Classification

  • Khagi, Bijen;Kwon, Goo-Rak
    • Journal of Multimedia Information System
    • /
    • Vol. 9, No. 3
    • /
    • pp.177-182
    • /
    • 2022
  • A deep neural network (DNN) includes variables whose values keep changing during training until they reach the final point of convergence. These variables are the coefficients of a polynomial expression underlying the feature extraction process. In general, DNNs work in multiple 'dimensions' depending on the number of channels and batches used for training. However, after feature extraction and before entering the SoftMax or another classifier, the features are converted from multiple N dimensions to a single vector form, where 'N' represents the number of activation channels. This usually happens in a fully connected layer (FCL), or dense layer. This reduced 2D feature is the subject of our analysis, so the trained weights of the FCL are used for the weight-class correlation analysis. The popular DNN models selected for our study are ResNet-101, VGG-19, and GoogleNet. These models are either fine-tuned (with all trained weights initially transferred) or trained from scratch (with no weights transferred). The comparison is then made by plotting the feature distribution and the final FCL weights.
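The flattening and FCL-weight analysis described above can be sketched with random stand-in tensors; the shapes, class count, and the correlation measure are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
# activation map after the last conv block: (channels, height, width)
features = rng.standard_normal((8, 4, 4))
flat = features.reshape(-1)              # N-dimensional features -> single vector

W = rng.standard_normal((3, flat.size))  # FCL weights: one row per class
logits = W @ flat                        # class scores entering SoftMax

# weight-class analysis: correlate each class's weight row with the features
corr = np.array([np.corrcoef(W[c], flat)[0, 1] for c in range(3)])
```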

CEGI를 이용한 3D 메쉬 워터마킹 (3D Mesh Watermarking Using CEGI)

  • 이석환;김태수;김승진;권기룡;이건일
    • 한국통신학회논문지
    • /
    • Vol. 29, No. 4C
    • /
    • pp.472-484
    • /
    • 2004
  • This paper proposes a watermarking algorithm for 3D mesh models using the CEGI (Complex Extended Gaussian Image). In the proposed algorithm, the 3D mesh model of the VRML data is divided into six patches, and the watermark is embedded in the normal-vector directions of the meshes that project onto cells with large complex-weight magnitudes in each patch's CEGI distribution. The watermark is extracted using the center coordinates of each patch and the rank information of the CEGI magnitude distribution. For an affine-transformed model, the watermark is extracted after restoring the original orientation by rearranging the initial center coordinates of the patches. Experiments evaluating the performance of the proposed algorithm confirm that it is robust to geometric and topological attacks.
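The CEGI idea of accumulating complex weights area*exp(j*d) over face-normal orientations can be sketched as below; the coarse azimuth binning is a simplification of a real spherical cell tessellation, and the patching and embedding steps are omitted.

```python
import numpy as np

def face_normal_area_dist(v0, v1, v2):
    """Unit normal, area, and plane distance from the origin of a triangle."""
    n = np.cross(v1 - v0, v2 - v0)
    area = np.linalg.norm(n) / 2.0
    n_unit = n / (2.0 * area)       # assumes non-degenerate triangles
    d = np.dot(n_unit, v0)          # distance of the face plane from the origin
    return n_unit, area, d

def cegi(vertices, faces, n_bins=8):
    """Accumulate complex weights area*exp(j*d) into coarse orientation
    cells, a simplified stand-in for the CEGI cell structure."""
    cells = np.zeros(n_bins, dtype=complex)
    for f in faces:
        n_unit, area, d = face_normal_area_dist(*vertices[f])
        # bin by azimuth of the normal (a real CEGI tessellates the sphere)
        azim = np.arctan2(n_unit[1], n_unit[0]) % (2 * np.pi)
        cells[int(azim / (2 * np.pi) * n_bins) % n_bins] += area * np.exp(1j * d)
    return cells

# toy mesh: a unit square at z = 1, split into two upward-facing triangles
verts = np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
faces = [[0, 1, 2], [0, 2, 3]]
cells = cegi(verts, faces)
```

Cells with large complex-weight magnitudes would then be the embedding targets.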

등가의 Wiener-Hopf 방정식을 이용한 수정된 Gram-Schmidt 알고리즘 (Modified Gram-Schmidt Algorithm Using Equivalent Wiener-Hopf Equation)

  • 안봉만;황지원;조주필
    • 한국통신학회논문지
    • /
    • Vol. 33, No. 7C
    • /
    • pp.562-568
    • /
    • 2008
  • This paper proposes two normalized algorithms: one for obtaining the coefficients of the TDL (tapped delay line) filter in the Gram-Schmidt algorithm and one for solving the equivalent Wiener-Hopf equation. In the theoretical analysis, whereas the conventional NLMS (Normalized Least Mean Square) algorithm normalizes by the sum of the input powers, the proposed normalized algorithms normalize by the sum of the eigenvalues. In computer simulations, system identification was performed in an unstable environment in which two poles lie close together just outside the unit circle. As a result, the two proposed algorithms could recursively obtain the TDL filter coefficients in the Gram-Schmidt algorithm and showed better convergence performance than the conventional NLMS algorithm.
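For contrast with the proposed eigenvalue-sum normalization, a standard NLMS system-identification loop, which normalizes by the tap-input power, looks like this; the filter length, step size, and FIR test system are illustrative.

```python
import numpy as np

def nlms(x, d, n_taps=4, mu=0.5, eps=1e-8):
    """Standard NLMS system identification over a tapped delay line.
    The update is normalized by the input power of the tap vector (the
    paper's variants normalize by a sum of eigenvalues instead)."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # TDL input: [x[n], x[n-1], ...]
        e = d[n] - w @ u                    # a priori error
        w += mu * e * u / (u @ u + eps)     # power-normalized update
    return w

rng = np.random.default_rng(4)
h_true = np.array([0.8, -0.4, 0.2, 0.1])    # unknown FIR system
x = rng.standard_normal(5000)               # white excitation
d = np.convolve(x, h_true)[:len(x)]         # noiseless desired signal
w_hat = nlms(x, d)
```

With noiseless data and persistent excitation, the estimated taps converge to the true impulse response.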

Real-time comprehensive image processing system for detecting concrete bridges crack

  • Lin, Weiguo;Sun, Yichao;Yang, Qiaoning;Lin, Yaru
    • Computers and Concrete
    • /
    • Vol. 23, No. 6
    • /
    • pp.445-457
    • /
    • 2019
  • Cracks are an important distress of concrete bridges and may reduce their service life and safety. However, traditional manual crack detection depends heavily on the experience of inspectors; furthermore, it is time-consuming, expensive, and often unsafe when inaccessible parts of a bridge, such as viaduct piers, are to be assessed. To address this problem, a real-time automatic crack detection system carried by an unmanned aerial vehicle (UAV) becomes an attractive choice. This paper designs a new automatic detection system for bridge cracks based on real-time comprehensive image processing. It has a small size, light weight, and low power consumption, and can be carried on a small UAV for real-time data acquisition and processing. The real-time comprehensive image processing algorithm used in this detection system combines the advantages of connected-domain area, shape extremum, morphology, and support vector data description (SVDD). The performance and validity of the proposed algorithm and system are verified. Compared with other detection methods, the proposed system can effectively detect cracks with high accuracy and high speed. The system designed in this paper is suitable for practical engineering applications.
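The connected-domain-area and shape screening can be sketched as below; the area threshold and elongation heuristic are assumptions, and the morphology and SVDD stages are omitted.

```python
import numpy as np
from collections import deque

def connected_components(binary):
    """4-connected component labeling of a binary image by BFS."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(binary)):
        if labels[i, j]:
            continue
        current += 1
        q = deque([(i, j)])
        labels[i, j] = current
        while q:
            a, b = q.popleft()
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if (0 <= na < binary.shape[0] and 0 <= nb < binary.shape[1]
                        and binary[na, nb] and not labels[na, nb]):
                    labels[na, nb] = current
                    q.append((na, nb))
    return labels, current

def crack_candidates(binary, min_area=5, min_elongation=3.0):
    """Keep components that are large and elongated, a rough stand-in for
    the connected-domain-area and shape-extremum screening."""
    labels, n = connected_components(binary)
    keep = []
    for c in range(1, n + 1):
        ys, xs = np.nonzero(labels == c)
        area = len(ys)
        h, w = int(np.ptp(ys)) + 1, int(np.ptp(xs)) + 1
        if area >= min_area and max(h, w) / min(h, w) >= min_elongation:
            keep.append(c)
    return labels, keep

img = np.zeros((20, 20), dtype=bool)
img[3, 2:15] = True       # a thin horizontal 'crack'
img[10:12, 10:12] = True  # a compact blob (noise)
labels, keep = crack_candidates(img)
```

The surviving elongated components would then be passed to the SVDD classifier for final crack confirmation.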