• Title/Summary/Keyword: Normalization approach

Speaker Adaptation in HMM-based Korean Isolated Word Recognition (한국어 격리단어 인식 시스템에서 HMM 파라미터의 화자 적응)

  • 오광철;이황수;은종관
    • The Transactions of the Korean Institute of Electrical Engineers / v.40 no.4 / pp.351-359 / 1991
  • This paper describes the performance of speaker adaptation using a probabilistic spectral mapping matrix in hidden Markov model (HMM)-based Korean isolated word recognition. Speaker adaptation based on probabilistic spectral mapping uses well-trained prototype HMMs and is carried out with the Viterbi, dynamic time warping, and forward-backward algorithms. Among these, the best performance is obtained by the Viterbi approach combined with codebook adaptation, which improves isolated word recognition accuracy by 42.6-68.8%. The selection of the initial values of the matrix and the normalization used in computing it also affect recognition accuracy.
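
As a rough illustration of how such a mapping matrix can be built, the sketch below accumulates a row-stochastic spectral mapping matrix from two frame-aligned codeword sequences. The alignment itself (Viterbi, DTW, or forward-backward) is assumed to have been computed upstream; all function and variable names are illustrative, not the paper's.

```python
import numpy as np

def estimate_spectral_mapping(proto_labels, new_labels, n_codewords, eps=1e-12):
    """Accumulate a probabilistic spectral mapping matrix from frame-aligned
    codeword sequences. Returns a row-stochastic M with M[i, j] estimating
    P(new-speaker codeword j | prototype codeword i)."""
    counts = np.zeros((n_codewords, n_codewords))
    for i, j in zip(proto_labels, new_labels):
        counts[i, j] += 1.0
    # Row normalization; the abstract notes that both the initialization of
    # the matrix and how this normalization is done affect accuracy.
    return counts / (counts.sum(axis=1, keepdims=True) + eps)

proto = np.array([0, 1, 1, 2])   # prototype-speaker codeword per frame
new = np.array([0, 1, 2, 2])     # new-speaker codeword per aligned frame
print(estimate_spectral_mapping(proto, new, n_codewords=3).round(2))
```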

Negative Side Effects of Denormalization-Oriented Data Modeling in Enterprise-Wide Database Design (기업 전사 자료 설계에서 역정규화 중심 데이터 모델링의 부작용)

  • Rhee, Hae-Kyung
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.6 s.312 / pp.17-25 / 2006
  • As information systems to be computerized are scaled up significantly, data modeling issues are once again considered crucial, as they were in the early 1980s, now under the terms data governance, data architecture, and data quality. Unfortunately, resorting merely to heuristics-based field approaches, with little firm theoretical foundation concerning the criteria of data design, quite often leads to major failures in the efficacy of data modeling. In this paper, we compare a normalization-centric data modeling approach, known in the literature as the Non-Stop (NS) Data Modeling methodology, with Information Engineering (IE), in which the notion of de-normalization is on many occasions supported and even recommended as a mandatory part of modeling. Quantitative analyses reveal that the NS methodology outperforms the IE methodology in terms of efficiency indices such as adequacy of entity judgment, degree of existence of data circulation paths (which confirms the balancedness of a data design), and the ratio of unnecessary data attribute replication.

Binary Hashing CNN Features for Action Recognition

  • Li, Weisheng;Feng, Chen;Xiao, Bin;Chen, Yanquan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.9 / pp.4412-4428 / 2018
  • The purpose of this work is to represent an entire video with Convolutional Neural Network (CNN) features for human action recognition. Because GPU memory is insufficient to take a whole video as the input of a CNN for end-to-end learning, a typical method uses sampled video frames as inputs and the corresponding labels as supervision. One major issue with this popular approach is that the local samples may contain neither the information indicated by the global labels nor sufficient motion information. To address this issue, we propose a binary hashing method to enhance the local feature extractors. First, we extract the local features and aggregate them into global features using maximum/minimum pooling. Second, we use the binary hashing method to capture the motion features. Finally, we concatenate the hashing features with the global features, using different normalization methods, to train the classifier. Experimental results on the JHMDB and MPII-Cooking datasets show that, for these new local features, binary hashing of the sparsely sampled features leads to significant performance improvements.
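
A minimal sketch of the aggregation scheme the abstract describes, assuming per-frame CNN features are already extracted: max/min pooling for the global features, a fixed sign-binarized random projection as a stand-in for the learned binary hashing, and per-part L2 normalization (one of several possible normalization choices) before concatenation.

```python
import numpy as np

def aggregate_and_hash(local_feats, n_bits=128):
    """local_feats: (n_frames, d) per-frame CNN features.
    Returns a video-level descriptor: max/min-pooled global features
    concatenated with pooled binary hash codes."""
    g_max = local_feats.max(axis=0)               # maximum pooling
    g_min = local_feats.min(axis=0)               # minimum pooling
    # Fixed random projection as a stand-in for the learned hashing; the
    # pooled sign bits are meant to retain coarse motion information that
    # global pooling alone discards.
    W = np.random.default_rng(42).standard_normal((local_feats.shape[1], n_bits))
    codes = (local_feats @ W > 0).astype(np.float32).mean(axis=0)
    # L2-normalize each part before concatenation (one of the normalization
    # choices the paper compares).
    parts = [p / (np.linalg.norm(p) + 1e-12) for p in (g_max, g_min, codes)]
    return np.concatenate(parts)

video = np.random.default_rng(0).standard_normal((30, 512))  # 30 frames, d=512
print(aggregate_and_hash(video).shape)                       # (1152,)
```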

Input Power Normalization of Zero-Error Probability based Algorithms (영오차 확률 기반 알고리즘의 입력 전력 정규화)

  • Kim, Chong-il;Kim, Namyong
    • The Journal of Korean Institute of Communications and Information Sciences / v.42 no.1 / pp.1-7 / 2017
  • The maximum zero-error probability (MZEP) algorithm outperforms MSE (mean squared error)-based algorithms in impulsive-noise environments. The magnitude-controlled input (MCI), which is inherent in that algorithm, is known to play the role of keeping the algorithm undisturbed by impulsive noise. In this paper, a new approach is proposed that normalizes the step size of the MZEP by the average power of the MCI. In simulations under impulsive noise with an impulse incidence rate of 0.03, the steady-state MSE of the proposed algorithm is shown to be about 2 dB better than that of the MZEP.
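
The sketch below is one plausible reading of the proposed normalization, with illustrative names and parameters rather than the paper's exact formulation: an MZEP-style adaptive filter whose Gaussian-kernel-weighted (magnitude-controlled) input has its running average power tracked, and whose step size is divided by that power.

```python
import numpy as np

def mzep_normalized(x, d, n_taps=8, mu=0.05, sigma=1.0, gamma=0.9, eps=1e-12):
    """MZEP-style adaptive filter whose step size is divided by a running
    average of the power of the magnitude-controlled input (MCI)."""
    w = np.zeros(n_taps)
    p = 1.0                                        # MCI power estimate
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]          # tap-input vector
        e = d[n] - w @ u
        # Gaussian-kernel weighting shrinks the effective input when |e| is
        # large, which is what keeps the update robust to impulsive noise.
        mci = np.exp(-e**2 / (2.0 * sigma**2)) * u
        p = gamma * p + (1.0 - gamma) * (mci @ mci)
        w += (mu / (p + eps)) * e * mci            # power-normalized step
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
d = np.convolve(x, [0.5, 0.3, -0.2])[:len(x)]      # unknown channel
print(mzep_normalized(x, d).round(2))  # should approach [0.5, 0.3, -0.2, 0, ...]
```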

Optimized Integer Cosine Transform (최적화 정수형 여현 변환)

  • 이종하;김혜숙;송인준;곽훈성
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.9 / pp.1207-1214 / 1995
  • We present an optimized integer cosine transform (OICT) as an alternative to the conventional discrete cosine transform (DCT), together with a fast computational algorithm. In the actual implementation of the OICT, we use techniques similar to those of the orthogonal integer transform (OIT). The normalization factors are approximated by a single one while keeping the reconstruction error at the best tolerable level. With a single normalization factor, both the forward and inverse transforms are performed using integers only. However, since many sets of integers can be selected in this manner, the best OICT matrix is obtained by minimizing the Hilbert-Schmidt norm while still admitting a fast computational algorithm. Using matrix decomposition, a fast algorithm for the order-8 OICT is developed that requires only 20 integer multiplications. This enables a high-performance 2-D DCT processor in which the floating-point operations are replaced by integer operations. We also ran simulations to test the performance of the order-8 OICT in terms of transform efficiency, maximum reducible bits, and mean square error for the Wiener filter. Compared with the DCT and OIT, the OICT outperformed both. Furthermore, when the conventional DCT coefficients were reduced to 7 bits, like those of the OICT, the reconstructed images were critically impaired because the orthogonality of the original DCT was lost, whereas the 7-bit OICT maintained a zero mean square reconstruction error.
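
A toy version of the idea, under loud assumptions: here the integer basis is just a rounded, scaled DCT-II matrix rather than the Hilbert-Schmidt-optimized basis of the paper, but it shows how a single shared normalization factor lets the 2-D transform run on integer arithmetic at the cost of a small reconstruction error.

```python
import numpy as np

def integer_dct_basis(N=8, scale=10):
    """Round a scaled orthonormal DCT-II basis to integers (a crude stand-in
    for the paper's basis, which is chosen by minimizing the Hilbert-Schmidt
    norm over candidate integer sets)."""
    k, n = np.arange(N)[:, None], np.arange(N)[None, :]
    C = np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] *= 1.0 / np.sqrt(2.0)
    return np.round(scale * np.sqrt(2.0 / N) * C).astype(int)

T = integer_dct_basis()
# One shared normalization factor: the integer rows have slightly different
# norms, and replacing them by a single averaged factor is exactly the
# approximation the abstract accepts in exchange for all-integer arithmetic.
s = 1.0 / (np.linalg.norm(T, axis=1) ** 2).mean()

forward = lambda B: s * (T @ B @ T.T)     # 2-D forward transform
inverse = lambda C: s * (T.T @ C @ T)     # approximate 2-D inverse

B = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float)
# Mean absolute reconstruction error (nonzero but small, from the single
# shared factor and the rounding of the basis).
print(np.abs(inverse(forward(B)) - B).mean())
```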

Multichannel Convolution Neural Network Classification for the Detection of Histological Pattern in Prostate Biopsy Images

  • Bhattacharjee, Subrata;Prakash, Deekshitha;Kim, Cho-Hee;Choi, Heung-Kook
    • Journal of Korea Multimedia Society / v.23 no.12 / pp.1486-1495 / 2020
  • The analysis of digital microscopy images plays a vital role in computer-aided diagnosis (CAD) and prognosis. The main purpose of this paper is to develop a machine learning technique to predict histological grades in prostate biopsy. To perform multiclass classification, an AI-based deep learning algorithm, a multichannel convolutional neural network (MCCNN), was developed by connecting layers of artificial neurons inspired by the human brain. The histological grades used for the analysis are benign, grade 3, grade 4, and grade 5. The proposed approach classifies multiple patterns of images extracted from the whole slide image (WSI) of a prostate biopsy based on the Gleason grading system. The MCCNN model takes three input channels (Red, Green, and Blue), extracts computational features from each channel, and concatenates them for multiclass classification. Stain normalization was carried out for each histological grade to standardize the intensity and contrast level in the images. The proposed model was trained, validated, and tested on histopathological images and achieved average accuracies of 96.4%, 94.6%, and 95.1%, respectively.
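
One simple form of the stain normalization mentioned above is per-channel mean/variance matching to a reference slide; the sketch below implements that form. The abstract does not specify this exact formulation (Reinhard-style matching in another color space is an equally common choice), so treat it as an assumption.

```python
import numpy as np

def stain_normalize(img, ref_mean, ref_std, eps=1e-6):
    """Per-channel (R, G, B) mean/std matching to reference statistics,
    standardizing intensity and contrast across slides."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for c in range(3):                    # one channel per network input
        mu, sd = img[..., c].mean(), img[..., c].std()
        out[..., c] = (img[..., c] - mu) / (sd + eps) * ref_std[c] + ref_mean[c]
    return np.clip(out, 0, 255).astype(np.uint8)

patch = np.random.default_rng(0).integers(0, 256, (64, 64, 3)).astype(np.uint8)
normalized = stain_normalize(patch, ref_mean=(180, 140, 170), ref_std=(30, 35, 28))
```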

Normalizing interval data and their use in AHP (구간데이터 정규화와 계층적 분석과정에의 활용)

  • Kim, Eun Young;Ahn, Byeong Seok
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.1-11 / 2016
  • Entani and Tanaka (2007) presented a new approach for obtaining interval evaluations suitable for handling uncertain data. Above all, their approach is characterized by the normalization of interval data and thus the elimination of redundant bounds. Interval global weights in AHP are then derived using such normalized interval data. In this paper, we present a heuristic method for finding extreme points of interval data, which extends the method of Entani and Tanaka (2007) and also helps to obtain normalized interval data. In the second part of this paper, we show that the solutions to the linear program for interval global weights can be obtained by simple inspection. Finally, the absolute dominance proposed by those authors is extended to pairwise dominance, which makes it possible to identify more dominated alternatives under the same information.
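
A small sketch of interval normalization in the Entani-Tanaka spirit: a bound is redundant if no sum-to-one completion of the other components can attain it, so each bound is tightened against the others. Function and variable names are illustrative.

```python
import numpy as np

def normalize_intervals(L, U):
    """Tighten interval data [L_i, U_i] so every bound is attainable by some
    normalized (sum-to-one) point in the box, removing redundant bounds.
    Assumes feasibility: sum(L) <= 1 <= sum(U)."""
    L, U = np.asarray(L, float), np.asarray(U, float)
    sL, sU = L.sum(), U.sum()
    # u_i may not exceed 1 minus the sum of the other lower bounds;
    # l_i may not fall below 1 minus the sum of the other upper bounds.
    U_new = np.minimum(U, 1.0 - (sL - L))
    L_new = np.maximum(L, 1.0 - (sU - U))
    return L_new, U_new

L, U = [0.1, 0.2, 0.3], [0.6, 0.7, 0.8]
print(normalize_intervals(L, U))   # upper bounds tighten to 0.5, 0.6, 0.7
```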

An Efficient Method for Solving a Multi-Item Newsboy Problem with a Budget-Constraint and a Reservation Policy (예산 제약과 예약 정책이 있는 복수 제품 신문 배달 소년 문제 해결을 위한 효율적 방법론)

  • Lee, Chang-Yong
    • Journal of Korean Society of Industrial and Systems Engineering / v.37 no.1 / pp.50-59 / 2014
  • In this paper, we develop an efficient approach to solve a multi-item budget-constrained newsboy problem with a reservation policy. A conventional approach for solving such a problem utilizes an approximation to the inverse of the Gaussian cumulative density function when the argument of the function is small, and a heuristic method for finding the optimal Lagrangian multiplier. In contrast, this paper proposes a more accurate method of evaluating the function, using normalization and an effective numerical integration method. We also propose an efficient way to find the optimal Lagrangian multiplier by proving that the equation for the budget constraint is in fact monotonically increasing in the Lagrangian multiplier. Numerical examples are tested to show the performance of the proposed approach, with emphasis on the behaviors of the inverse Gaussian cumulative density function and the Lagrangian multiplier. Using sensitivity analysis over different budget constraints, we show that the reservation policy indeed provides greater expected profit than the classical model without the reservation policy.
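
The monotonicity in the Lagrangian multiplier is what makes a simple bracketing search work, and the sketch below exploits it on a plain critical-fractile newsboy model. This is a reduced stand-in: no reservation policy, and the standard library's inverse normal CDF in place of the paper's numerical-integration scheme; in this sign convention budget usage decreases as the multiplier grows.

```python
from statistics import NormalDist

def solve_budget_newsboy(mu, sigma, price, cost, budget, tol=1e-8):
    """Bisection on the Lagrange multiplier for a budget-constrained
    multi-item newsboy problem with Gaussian demand."""
    N, items = NormalDist(), range(len(mu))

    def q(i, lam):
        frac = (price[i] - cost[i] * (1.0 + lam)) / price[i]
        frac = min(max(frac, 1e-9), 1.0 - 1e-9)       # keep Phi^{-1} finite
        return mu[i] + sigma[i] * N.inv_cdf(frac)     # critical-fractile order

    def spend(lam):
        return sum(cost[i] * q(i, lam) for i in items)

    if spend(0.0) <= budget:                          # constraint not binding
        return [q(i, 0.0) for i in items], 0.0
    lo, hi = 0.0, 1.0
    while spend(hi) > budget:                         # bracket the multiplier
        hi *= 2.0
    while hi - lo > tol:                              # monotone, so bisect
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if spend(mid) > budget else (lo, mid)
    lam = 0.5 * (lo + hi)
    return [q(i, lam) for i in items], lam

Q, lam = solve_budget_newsboy(mu=[100, 80], sigma=[20, 25],
                              price=[12.0, 9.0], cost=[7.0, 5.0], budget=900.0)
print([round(q, 1) for q in Q], round(lam, 4))
```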

A Real-Time Method for the Diagnosis of Multiple Switch Faults in NPC Inverters Based on Output Currents Analysis

  • Abadi, Mohsen Bandar;Mendes, Andre M.S.;Cruz, Sergio M.A.
    • Journal of Power Electronics / v.16 no.4 / pp.1415-1425 / 2016
  • This paper presents a new approach for fault diagnosis in three-level neutral point clamped inverters. The proposed method is based on the average values of the positive and negative parts of normalized output currents. This method is capable of detecting and locating multiple open-circuit faults in the controlled power switches of converters in half of a fundamental period of those currents. The implementation of this diagnostic approach only requires two output currents of the inverter. Therefore, no additional sensors are needed other than the ones already used by the control system of a drive based on this type of converter. Moreover, through the normalization of currents, the diagnosis is independent of the load level of the converter. The performance and effectiveness of the proposed diagnostic technique are validated by experimental results obtained under steady-state and transient conditions.
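
A schematic version of the diagnostic variables described above: amplitude-normalize a measured phase current, then average its positive and negative parts over one fundamental period. A healthy sinusoid gives roughly 1/pi for both averages, while a suppressed half-wave (open-circuit switch) drives the corresponding average toward zero. Thresholds and names here are illustrative, not the paper's.

```python
import numpy as np

def diagnose_phase(i_phase, eps=1e-9):
    """Average the positive and negative parts of an amplitude-normalized
    phase current; the normalization makes the result independent of load."""
    i_n = i_phase / (np.abs(i_phase).max() + eps)
    pos = np.mean(np.maximum(i_n, 0.0))
    neg = np.mean(np.minimum(i_n, 0.0))
    healthy = 1.0 / np.pi                 # expected average for a sinusoid
    fault_pos = pos < 0.5 * healthy       # positive half-wave suppressed
    fault_neg = -neg < 0.5 * healthy      # negative half-wave suppressed
    return pos, neg, fault_pos, fault_neg

t = np.linspace(0.0, 0.02, 400, endpoint=False)      # one 50 Hz period
i_healthy = 10.0 * np.sin(2 * np.pi * 50 * t)
i_faulty = np.maximum(i_healthy, 0.0)                # negative half-wave lost
print(diagnose_phase(i_healthy))                     # no fault flags
print(diagnose_phase(i_faulty))                      # fault_neg is True
```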

Curvature and Histogram of Oriented Gradients based 3D Face Recognition using Linear Discriminant Analysis

  • Lee, Yeunghak
    • Journal of Multimedia Information System / v.2 no.1 / pp.171-178 / 2015
  • This article describes a three-dimensional (3D) face recognition system using histograms of oriented gradients (HOG) based on face curvature. The surface curvatures of the face contain the most important personal feature information. In this paper, 3D face images are recognized by face components: cheek, eyes, mouth, and nose. In the first step of the proposed approach, face curvatures that represent the facial features are extracted from the 3D face images after normalization using the singular value decomposition (SVD). The Fisherface method is then applied to each component curvature face; it is adopted because it maintains the surface attributes of the face curvature while reducing the image dimension. The HOG descriptor is a state-of-the-art feature that has been shown to significantly outperform existing feature sets for several object detection and recognition tasks. In the last step, linear discriminant analysis is applied to each component. The experimental results show that the proposed approach leads to a higher recognition accuracy rate than other methods.
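
A compressed sketch of the final two stages, HOG description plus Fisherface-style LDA, run on stand-in component curvature maps. Curvature estimation, SVD-based pose normalization, and the per-component fusion from the paper are assumed done upstream, and all parameters here are illustrative.

```python
import numpy as np
from skimage.feature import hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def component_hog_lda(curvature_maps, labels):
    """Fit a Fisherface-style LDA classifier on HOG descriptors computed
    from one facial component's curvature maps."""
    feats = np.array([hog(m, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2)) for m in curvature_maps])
    lda = LinearDiscriminantAnalysis().fit(feats, labels)
    return lda, feats

rng = np.random.default_rng(0)
maps = rng.random((20, 64, 64))             # stand-in curvature images
labels = np.repeat(np.arange(4), 5)         # 4 subjects, 5 samples each
model, feats = component_hog_lda(maps, labels)
print(model.score(feats, labels))           # training accuracy (sanity check)
```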