• Title/Summary/Keyword: Pattern Normalization

Diagnosis Method for Stator-Faults in Induction Motor using Park's Vector Pattern and Convolution Neural Network (Park's Vector 패턴과 CNN을 이용한 유도전동기 고정자 고장진단방법)

  • Goh, Yeong-Jin;Kim, Gwi-Nam;Kim, YongHyeon;Lee, Buhm;Kim, Kyoung-Min
    • Journal of IKEEE / v.24 no.3 / pp.883-889 / 2020
  • In this paper, we propose a method that uses the PV (Park's Vector) pattern for induction motor stator fault diagnosis with a CNN (Convolutional Neural Network). The conventional CNN-based fault diagnosis method images the three-phase currents directly, which requires cumbersome normalization in which the starting point and phase of the current are set artificially. When the PV pattern is used instead, this normalization problem disappears because the three-phase currents trace a consistent circular pattern. Owing to this automatic normalization, the proposed method improves CNN accuracy by 18.18% compared with the previous current-data image.
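
As a point of reference, the sketch below shows one way (not necessarily the authors' implementation) to turn three-phase currents into a Park's Vector pattern image for a CNN. The transform constants are the standard Park's/Concordia ones; the synthetic currents, the 64x64 image size, and the binarization are illustrative assumptions.

```python
import numpy as np

def parks_vector_image(ia, ib, ic, size=64):
    """Map three-phase stator currents onto a 2-D Park's Vector pattern image.

    The (id, iq) locus of a healthy machine is close to a circle, so the
    image needs no manual alignment of the current's starting point or phase.
    """
    i_d = np.sqrt(2 / 3) * ia - ib / np.sqrt(6) - ic / np.sqrt(6)
    i_q = ib / np.sqrt(2) - ic / np.sqrt(2)

    # Scale by the largest magnitude so every sample fills the same grid.
    r = np.max(np.hypot(i_d, i_q))
    hist, _, _ = np.histogram2d(i_d, i_q, bins=size, range=[[-r, r], [-r, r]])
    return (hist > 0).astype(np.float32)      # binary pattern image for the CNN

# Synthetic balanced currents standing in for measured stator currents.
t = np.linspace(0, 0.2, 2000)
ia = np.sin(2 * np.pi * 60 * t)
ib = np.sin(2 * np.pi * 60 * t - 2 * np.pi / 3)
ic = np.sin(2 * np.pi * 60 * t + 2 * np.pi / 3)
print(parks_vector_image(ia, ib, ic).shape)    # (64, 64) CNN input
```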

Comparative Analysis of BP and SOM for Partial Discharge Pattern Recognition (부분방전 패턴인식에 대한 BP 및 SOM 알고리즘 비교 분석)

  • Lee, Ho-Keun;Kim, Jeong-Tae;Lim, Yoon-Seok;Kim, Ji-Hong;Koo, Ja-Yoon
    • Proceedings of the KIEE Conference / 2004.07c / pp.1930-1932 / 2004
  • The SOM (Self-Organizing Map) algorithm, which offers advantages such as the ability to accumulate data and to trace degradation trends, was compared with the conventionally used BP (Back Propagation) algorithm. For this purpose, partial discharge data were acquired and analyzed from artificial defects in GIS. As a result, the pattern recognition rate of the BP algorithm was generally found to be better than that of the SOM algorithm. However, the SOM algorithm showed strong on-site applicability, such as the ability to suggest the possibility of new patterns; if its recognition rate can be increased, the SOM algorithm can therefore be applied to partial discharge analysis. The image-processing approach also requires normalization of the PRPDA graph, but this normalization degraded the results of both the BP and SOM algorithms, so further study is required to solve this problem.
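
As a rough illustration of the normalization step discussed above (not the authors' code), the sketch below builds a PRPD-style histogram from partial discharge events and scales it to [0, 1] before it would be fed to a BP or SOM classifier; the bin counts and the synthetic events are assumptions.

```python
import numpy as np

def prpd_pattern(phase_deg, charge, phase_bins=36, mag_bins=32):
    """Build a phase-resolved partial discharge (PRPD) histogram and scale
    it to [0, 1] so that BP and SOM inputs share a common range."""
    hist, _, _ = np.histogram2d(phase_deg % 360.0, charge,
                                bins=[phase_bins, mag_bins],
                                range=[[0.0, 360.0], [0.0, charge.max()]])
    # Per-pattern max normalization; as noted in the abstract, this step can
    # also discard absolute amplitude information and hurt recognition.
    return hist / hist.max()

# Placeholder PD events: phase angle in degrees, apparent charge in pC.
rng = np.random.default_rng(0)
phase = np.concatenate([rng.normal(60, 15, 500), rng.normal(240, 15, 500)])
charge = rng.gamma(2.0, 50.0, 1000)
print(prpd_pattern(phase, charge).shape)   # (36, 32) pattern, flattened for BP/SOM
```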

Development of a Vehicle Classification Algorithm Using an Inductive Loop Detector on a Freeway (단일 루프 검지기를 이용한 차종 분류 알고리즘 개발)

  • 이승환;조한선;최기주
    • Journal of Korean Society of Transportation / v.14 no.1 / pp.135-154 / 1996
  • This paper presents a heuristic algorithm for classifying vehicles using a single loop detector. The data used to develop the algorithm are the frequency variations of vehicles sensed by the circular loop detectors normally buried beneath the expressway. The algorithm requires pre-processing that consists of two parts: one is the normalization of both occupancy time and frequency variation; the other is finding a suitable sample size for each vehicle category and computing the average of the normalized frequencies along the occupancy time, which is stored for comparison. Detected values are then compared with the stored data to locate the best-fitting pattern. After the normalization process, we developed frameworks for the comparison schemes. The scales tested were 10 and 15 frames in occupancy time (X-axis) and 10 and 15 frames in frequency variation (Y-axis); the 10-15 X-Y frame combination turned out to be the most efficient normalization scale, producing a 96 percent correct classification rate for six vehicle types.
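
A minimal sketch of the template-matching idea described above, under assumed details: the signature curves, the min-max scaling, and the hypothetical "passenger car"/"truck" templates are placeholders, while the 10 x 15 frame grid follows the abstract.

```python
import numpy as np

def normalize_signature(freq_curve, time_frames=10, freq_levels=15):
    """Resample a loop-detector frequency-variation curve onto a fixed
    occupancy-time/frequency grid (here the 10 x 15 frame scale)."""
    x_old = np.linspace(0.0, 1.0, len(freq_curve))
    x_new = np.linspace(0.0, 1.0, time_frames)
    resampled = np.interp(x_new, x_old, freq_curve)
    span = resampled.max() - resampled.min()
    return (resampled - resampled.min()) / (span + 1e-9) * freq_levels

def classify(signature, templates):
    """Pick the vehicle category whose stored average signature fits best."""
    return min(templates, key=lambda c: np.sum((signature - templates[c]) ** 2))

# Hypothetical per-category signatures (normally averaged from real samples).
car_curve = np.hanning(40)                                             # single hump
truck_curve = np.concatenate([np.hanning(60), 0.8 * np.hanning(60)])   # double hump
templates = {"passenger car": normalize_signature(car_curve),
             "truck": normalize_signature(truck_curve)}

detected = normalize_signature(1.1 * np.hanning(45))
print(classify(detected, templates))                                   # -> "passenger car"
```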

Transformation Based Walking Speed Normalization for Gait Recognition

  • Kovac, Jure;Peer, Peter
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.11 / pp.2690-2701 / 2013
  • Humans are able to recognize a small number of people they know well by the way they walk. This ability is the basic motivation for using human gait as a means of biometric identification. Such a biometric can be captured in public places from a distance without the subject's collaboration, awareness, or even consent. Although current approaches give encouraging results, we are still far from effective use in practical applications. In general, methods impose various constraints to circumvent influencing factors such as changes of view, walking speed, capture environment, clothing, footwear, and object carrying, which have a negative impact on recognition results. In this paper we investigate the influence of walking speed variation on different vision-based gait recognition approaches and propose a normalization based on geometric transformations that mitigates its influence on recognition results. With an evaluation on the MoBo gait dataset we demonstrate the benefits of using such normalization in combination with different types of gait recognition approaches.
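
The paper applies geometric transformations to silhouettes; the sketch below is only a simplified, feature-level analogue of the same idea (resample one gait cycle to a canonical length and rescale a stride-related amplitude), with the feature curves and cycle length invented for illustration.

```python
import numpy as np

def normalize_gait_cycle(stride_feature, cycle_len=64, ref_amplitude=1.0):
    """Speed-normalization sketch: resample one gait cycle to a fixed number
    of frames and rescale the stride-related feature to a reference
    amplitude, so sequences captured at different speeds become comparable."""
    x_old = np.linspace(0.0, 1.0, len(stride_feature))
    x_new = np.linspace(0.0, 1.0, cycle_len)
    resampled = np.interp(x_new, x_old, stride_feature)
    amplitude = resampled.max() - resampled.min()
    return (resampled - resampled.min()) / (amplitude + 1e-9) * ref_amplitude

# Placeholder feature: leg-opening width per frame for a slow and a fast walk.
slow = 0.6 * np.abs(np.sin(np.linspace(0, 2 * np.pi, 90)))
fast = 0.9 * np.abs(np.sin(np.linspace(0, 2 * np.pi, 45)))
print(np.allclose(normalize_gait_cycle(slow), normalize_gait_cycle(fast), atol=0.05))  # True
```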

Verification of Normalized Confidence Measure Using n-Phone Based Statistics

  • Kim, Byoung-Don;Kim, Jin-Young;Na, Seung-You;Choi, Seung-Ho
    • Speech Sciences / v.12 no.1 / pp.123-134 / 2005
  • A confidence measure (CM) is used to reject mis-recognized words in an automatic speech recognition (ASR) system. Rahim, Lee, Juang, and Cho's confidence measure (RLJC-CM), calculated by averaging phone-level CMs, is one of the widely used CMs [1]. Kim et al. [2] extended the RLJC-CM by devising the normalized CM (NCM), a statistically normalized version of the RLJC-CM based on tri-phone CM normalization. In this paper we verify the NCM by generalizing the tri-phone to an n-phone unit. To apply various units for the normalization, mono-phone, tri-phone, quin-phone, and $\infty$-phone units are tested. Experiments on isolated word recognition show that tri-phone based normalization is sufficient to enhance the rejection performance of the ASR system. We also explain the NCM in terms of two-class pattern classification problems.
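
A minimal sketch of the normalization idea (not the authors' code): each phone-level CM is z-normalized with the mean and standard deviation gathered for its n-phone unit, then the results are averaged into a word-level score. The tri-phone labels and statistics below are invented placeholders.

```python
import numpy as np

def word_ncm(phone_cms, phone_units, stats):
    """Normalized confidence measure: z-normalize each phone-level CM with
    the statistics of its n-phone unit, then average over the word."""
    normalized = []
    for cm, unit in zip(phone_cms, phone_units):
        mean, std = stats.get(unit, (0.0, 1.0))   # back off if the unit is unseen
        normalized.append((cm - mean) / (std + 1e-9))
    return float(np.mean(normalized))

# Hypothetical tri-phone statistics estimated on a development set.
triphone_stats = {"k-a+m": (-1.2, 0.4), "a-m+i": (-0.9, 0.5)}
score = word_ncm([-1.0, -0.7], ["k-a+m", "a-m+i"], triphone_stats)
print(score)   # word-level NCM; reject the hypothesis if below a threshold
```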

Automatic Generation of Code Optimizer for DFA Pattern Matching (DFA 패턴 매칭을 위한 코드 최적화기의 자동적 생성)

  • Yun, Sung-Lim;Oh, Se-Man
    • The KIPS Transactions:PartA / v.14A no.1 s.105 / pp.31-38 / 2007
  • Code optimization converts a program into code that is equivalent but more efficient; this process is carried out by a code optimizer. This paper designs and implements a code optimizer generator that automatically produces a code optimizer. In other words, a code optimizer is generated automatically for DFA pattern matching, which finds the optimal code for an incoming pattern description. DFA pattern matching removes the redundant comparisons that occur when patterns are searched, by means of a normalization process, and simplifies and restructures pattern shapes at low cost. Automatically generating the code optimizer for DFA pattern matching eliminates the extra effort of building a code optimizer every time the code undergoes various transformations and enables a formal treatment of code optimization. In addition, the advantage of building a DFA for optimization is that matching is faster and the cost of the code optimizer generator is reduced.
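
To make the "redundant comparisons are removed" point concrete, here is a small sketch (not the paper's generator) of a trie-style DFA over normalized pattern descriptions; the IR symbols and pattern names are hypothetical.

```python
def build_dfa(patterns):
    """Build a trie-style DFA over normalized pattern descriptions; shared
    prefixes are stored once, so redundant comparisons between overlapping
    patterns are removed."""
    dfa = [{}]                       # state -> {symbol: next state}
    accept = {}                      # accepting state -> pattern name
    for name, symbols in patterns.items():
        state = 0
        for sym in symbols:
            if sym not in dfa[state]:
                dfa.append({})
                dfa[state][sym] = len(dfa) - 1
            state = dfa[state][sym]
        accept[state] = name
    return dfa, accept

def match(dfa, accept, symbols):
    """Run the DFA over an incoming pattern description."""
    state = 0
    for sym in symbols:
        if sym not in dfa[state]:
            return None
        state = dfa[state][sym]
    return accept.get(state)

# Hypothetical IR patterns in prefix form after normalization.
patterns = {"add-const": ["ADD", "REG", "CONST"],
            "add-reg":   ["ADD", "REG", "REG"],
            "load":      ["LOAD", "ADDR"]}
dfa, accept = build_dfa(patterns)
print(match(dfa, accept, ["ADD", "REG", "CONST"]))   # -> "add-const"
```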

Multi-Frame-Based Super Resolution Algorithm by Using Motion Vector Normalization and Edge Pattern Analysis (움직임 벡터의 정규화 및 에지의 패턴 분석을 이용한 복수 영상 기반 초해상도 영상 생성 기법)

  • Kwon, Soon-Chan;Yoo, Jisang
    • The Journal of Korean Institute of Communications and Information Sciences / v.38A no.2 / pp.164-173 / 2013
  • In this paper, we propose a multi-frame based super-resolution algorithm that uses motion vector normalization and edge pattern analysis. Existing algorithms are constrained to sub-pixel motion and global translation between frames, which limits their applicability. A single-frame based super-resolution algorithm using the discrete wavelet transform, which is robust to these problems, has been proposed, but it suffers from the limited amount of information available for interpolation. To solve these problems, we propose motion vector normalization and edge pattern analysis for 2x2 block motion estimation. Experimental results show that the proposed algorithm performs better than other conventional algorithms.
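
As a simplified reading of the motion-vector part (edge pattern analysis is omitted, and this is not the authors' exact procedure), the sketch below does full-search matching on 2x2 blocks and then scales ("normalizes") the motion vectors onto the high-resolution grid; the frames, search range, and x2 scale factor are assumptions.

```python
import numpy as np

def block_motion(ref, cur, block=2, search=2):
    """Full-search motion estimation for 2x2 blocks (sketch)."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            best, best_mv = np.inf, (0, 0)
            cur_blk = cur[by:by + block, bx:bx + block]
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(ref[y:y + block, x:x + block] - cur_blk).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs

def normalize_mv(mvs, scale=2):
    """Map low-resolution motion vectors onto the high-resolution lattice so
    the frames can be registered for super-resolution reconstruction."""
    return mvs * scale

rng = np.random.default_rng(1)
ref = rng.random((8, 8))
cur = np.roll(ref, shift=(1, 0), axis=(0, 1))   # simulate a 1-pixel vertical shift
print(normalize_mv(block_motion(ref, cur)))
```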

Efficient two-step pattern matching method for off-line recognition of handwritten Hangul (필기체 한글의 오프라인 인식을 위한 효과적인 두 단계 패턴 정합 방법)

  • 박정선;이성환
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.4 / pp.1-8 / 1994
  • In this paper, we propose an efficient two-step pattern matching method that provides shape-distortion-tolerant recognition of handwritten Hangul syllables. In the first step, nonlinear shape normalization is carried out to compensate for global shape distortions in handwritten characters, followed by a preliminary classification based on simple pattern matching. In the next step, nonlinear pattern matching, which achieves the best match between the input and reference patterns, is carried out to compensate for local shape distortions, followed by a detailed classification that determines the final result. Because the performance of recognition systems based on pattern matching is greatly affected by the quality of the reference patterns, we construct reference patterns by combining the proposed nonlinear pattern matching method with a well-known averaging technique. Experimental results reveal that recognition performance is greatly improved by the proposed two-step pattern matching method and the reference pattern construction scheme.
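
A toy sketch of the coarse-then-fine idea (not the paper's nonlinear matching): step 1 ranks shape-normalized templates by plain pixel difference, step 2 re-scores the best candidates with a distortion-tolerant match that lets each template pixel find its best counterpart in a small neighbourhood. The synthetic stroke "glyphs" and the neighbourhood size are invented for illustration.

```python
import numpy as np

def make_glyph(kind, size=16):
    """Tiny synthetic stroke patterns standing in for reference characters."""
    img = np.zeros((size, size))
    if kind == "vertical":
        img[:, 7:9] = 1.0
    elif kind == "horizontal":
        img[7:9, :] = 1.0
    else:                                   # diagonal stroke
        np.fill_diagonal(img, 1.0)
    return img

def coarse_score(img, tpl):
    """Step 1: simple pattern matching on shape-normalized images."""
    return np.abs(img - tpl).mean()

def fine_score(img, tpl, shift=1):
    """Step 2: tolerate local distortion by letting each template pixel
    match the best input pixel inside a small neighbourhood."""
    h, w = img.shape
    padded = np.pad(img, shift)
    errors = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * shift + 1, x:x + 2 * shift + 1]
            errors[y, x] = np.abs(window - tpl[y, x]).min()
    return errors.mean()

def classify(img, templates, top_k=2):
    coarse = sorted(templates, key=lambda c: coarse_score(img, templates[c]))[:top_k]
    return min(coarse, key=lambda c: fine_score(img, templates[c]))

templates = {"vertical": make_glyph("vertical"),
             "horizontal": make_glyph("horizontal"),
             "diagonal": make_glyph("diagonal")}
query = np.roll(templates["vertical"], 1, axis=1)   # locally shifted stroke
print(classify(query, templates))                    # -> "vertical"
```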

The Algorithm Design and Implementation of Microarray Data Classification using the Bayesian Method (베이지안 기법을 적용한 마이크로어레이 데이터 분류 알고리즘 설계와 구현)

  • Park, Su-Young;Jung, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.12 / pp.2283-2288 / 2006
  • As recent advances in bioinformatics technology make micro-level experiments possible, we can observe the expression pattern of the whole genome on a single chip and analyze the interactions of thousands of genes at the same time. DNA microarray technology thus opens new directions for understanding complex organisms, and effective methods are required to analyze the enormous amount of gene information it produces. In this thesis, we used sample data from the bioinformatics core group at Harvard University. We designed and implemented a system that divides the data into two classes with a Bayesian algorithm and the ASA feature extraction method, after a normalization process that reduces or removes the noise introduced by various factors in the microarray experiment, and then evaluates the classification accuracy. The system achieved an accuracy of 98.23% after Lowess normalization.
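
The sketch below covers only the two pieces named in the abstract that have standard formulations, Lowess normalization on the MA scale and a (naive) Bayesian classifier; the expression data, class labels, and the lowess `frac` value are placeholders, and the paper's ASA feature extraction is not reproduced.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess
from sklearn.naive_bayes import GaussianNB

def lowess_normalize(red, green, frac=0.4):
    """Intensity-dependent (Lowess) normalization of two-channel microarray
    data: fit a trend of M = log-ratio against A = average log-intensity
    and subtract it."""
    m = np.log2(red / green)                 # log-ratio
    a = 0.5 * np.log2(red * green)           # average log-intensity
    trend = lowess(m, a, frac=frac, return_sorted=False)
    return m - trend                         # normalized log-ratios

# Placeholder expression data: rows = samples, columns = genes.
rng = np.random.default_rng(3)
red = rng.lognormal(8, 1, size=(40, 200))
green = rng.lognormal(8, 1, size=(40, 200))
X = np.vstack([lowess_normalize(r, g) for r, g in zip(red, green)])
y = np.array([0] * 20 + [1] * 20)            # two dummy tumour classes

clf = GaussianNB().fit(X[::2], y[::2])       # train on even rows
print(clf.score(X[1::2], y[1::2]))           # smoke test only: dummy data
```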

Color Modification Detection Using Normalization and Weighted Sum of Color Components (컬러 성분의 정규화와 가중치 합을 이용한 컬러 조작 검출)

  • Shin, Hyun Jun;Jeon, Jong Ju;Eom, Il Kyu
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.12 / pp.111-119 / 2016
  • Most commercial digital cameras acquire the colors of an image through a color filter array and interpolate the missing pixels. As a result, original pixels and interpolated pixels have different statistical characteristics. If the colors of an image are modified, the color filter array pattern composed of the RGB channels changes, and a color forgery detection method based on this pattern change has been presented. The conventional method counts the pixels that exceed the maximum or minimum value of a pre-defined block, exploiting only the green component. However, this algorithm cannot remove the flat areas that arise when colors are changed, and it cannot detect forged images with few green pixels. In this paper, we propose an enhanced color forgery detection algorithm using normalization and a weighted sum of the color components. Our method reduces the detection error by using all color components and removing flat areas. Simulations show that the proposed method achieves better detection performance than the conventional method.
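
A rough sketch of the general idea only, not the authors' detector: normalize each color channel, combine them with a weighted sum, count per-block pixels that exceed the max/min of their neighbours (a CFA-interpolation trace), and skip flat blocks. The weights, block size, flatness threshold, and test image are all assumed values.

```python
import numpy as np

def extreme_pixel_feature(rgb, block=8, weights=(0.25, 0.5, 0.25), flat_thresh=2.0):
    """Per-block count of pixels lying above/below all four neighbours in a
    normalized, weighted-sum image; flat blocks are skipped because they
    carry no interpolation trace."""
    rgb = rgb.astype(np.float64)
    lo = rgb.min(axis=(0, 1))
    hi = rgb.max(axis=(0, 1))
    norm = (rgb - lo) / (hi - lo + 1e-9)                 # per-channel normalization
    y = sum(w * norm[..., c] for c, w in enumerate(weights))

    up, down = np.roll(y, 1, axis=0), np.roll(y, -1, axis=0)
    left, right = np.roll(y, 1, axis=1), np.roll(y, -1, axis=1)
    extreme = (y > np.maximum.reduce([up, down, left, right])) | \
              (y < np.minimum.reduce([up, down, left, right]))

    h, w = y.shape
    counts = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            blk = rgb[by:by + block, bx:bx + block]
            if blk.max() - blk.min() < flat_thresh:      # remove flat areas
                continue
            counts.append(int(extreme[by:by + block, bx:bx + block].sum()))
    return np.array(counts)

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(64, 64, 3))             # placeholder image
print(extreme_pixel_feature(img).mean())                 # per-block statistic
```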