• Title/Summary/Keyword: Korean normalization

An Improved Image Classification Using Batch Normalization and CNN (배치 정규화와 CNN을 이용한 개선된 영상분류 방법)

  • Ji, Myunggeun;Chun, Junchul;Kim, Namgi
    • Journal of Internet Computing and Services
    • /
    • v.19 no.3
    • /
    • pp.35-42
    • /
    • 2018
  • Deep learning is known to achieve high accuracy among methods for image classification. In this paper, we propose a method for enhancing the accuracy of image classification by adding a batch normalization layer to an existing deep CNN (Convolutional Neural Network). Batch normalization computes the mean and variance of each batch and normalizes the activations with them, reducing the shift in each layer. To demonstrate the superiority of the proposed method, accuracy and mAP are measured in image classification experiments on five image data sets: SHREC13, MNIST, SVHN, CIFAR-10, and CIFAR-100. Experimental results show that the CNN with batch normalization achieves better classification accuracy and mAP than the conventional CNN.
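
The per-batch mean/variance normalization described in the abstract can be sketched in a few lines of NumPy; this is a minimal illustration of the standard batch normalization transform (with learnable scale `gamma` and shift `beta`), not the authors' implementation:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of activations (rows = samples, cols = features)
    to zero mean / unit variance per feature, then apply a learnable
    scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # eps avoids division by zero
    return gamma * x_hat + beta

batch = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
out = batch_norm(batch)
print(out.mean(axis=0))  # ~0 per feature
print(out.std(axis=0))   # ~1 per feature
```

In a network, `gamma` and `beta` are trained along with the other weights, so the layer can recover the original activation scale if that is optimal.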

A study of Traditional Korean Medicine(TKM) term's Normalization for Enlarged Reference terminology model (참조용어(Reference Terminology) 모델 확장을 위한 한의학용어 정형화(Normalization) 연구)

  • Jeon, Byoung-Uk;Hong, Seong-Cheon
    • Journal of the Korean Institute of Oriental Medical Informatics
    • /
    • v.15 no.2
    • /
    • pp.1-6
    • /
    • 2009
  • The discipline of terminology is based on its own theoretical principles and consists primarily of the following activities: analysing the concepts and concept structures used in a field or domain, identifying the terms assigned to those concepts, establishing correspondences between terms in the various languages in the case of bilingual or multilingual terminology, and creating new terms as required. Word properties comprise syntax (how words are put together), morphology (inflection, derivation, and compounding), and orthography (spelling). In addition, terms in TKM (Traditional Korean Medicine) have two important elements: visual character and phonetic notation. Visual character covers spelling, sorted words, stop words, etc.; for example, the sorted-word variants '다한', '한다', '多汗', and '汗多' should be treated as the same term. Phonetic notation covers palatalization, the initial sound law, etc.; for example, the variants '수족랭', '수족냉', '手足冷', and '手足冷' should be treated as the same term. Normalizing terms in this way is therefore a method for enlarging the reference terminology, and for this reason normalization of TKM terms is necessary.
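
The two kinds of variant the abstract describes can be canonicalized with very simple string operations; the sketch below is a toy illustration of the idea (the mapping table is invented for the example, not taken from the paper):

```python
def normalize_sort_variant(term):
    # Canonicalize "sorted word" variants by ordering the characters,
    # so '다한'/'한다' (or '多汗'/'汗多') map to one key.
    return "".join(sorted(term))

# Hypothetical phonetic-notation table: map sound-law variants
# onto one canonical syllable (e.g. '랭' -> '냉').
PHONETIC_MAP = {"랭": "냉"}

def normalize_phonetic(term):
    return "".join(PHONETIC_MAP.get(ch, ch) for ch in term)

print(normalize_sort_variant("다한") == normalize_sort_variant("한다"))  # True
print(normalize_phonetic("수족랭") == "수족냉")  # True
```

A production terminology system would of course use a curated mapping table and handle multi-syllable rules, but the lookup-and-canonicalize structure is the same.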


Comparison of Normalizations for cDNA Microarray Data

  • Kim, Yun-Hui;Kim, Ho;Park, Ung-Yang;Seo, Jin-Yeong;Jeong, Jin-Ho
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2002.05a
    • /
    • pp.175-181
    • /
    • 2002
  • cDNA microarray experiments allow us to investigate the expression levels of thousands of genes simultaneously and make it easy to compare gene expression across different populations. However, researchers must be cautious in interpreting the results because of unexpected sources of variation, such as systematic errors from the microarrayer and differences in cDNA dye intensity. Moreover, the scanner reports both the mean and the median of the signal and background pixels, so one must choose which raw summary to use in the analysis. In this paper, we compare the results obtained using the mean versus the median of the raw data, and normalization methods for reducing the systematic errors, with arm skin cells of old and young males. The median is preferable to the mean because the distribution of the test statistic (t-statistic) from the median is closer to the normal distribution than that from the mean. Scaled print-tip normalization performs better than global or lowess normalization, judged by the distribution of the test statistic.
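
Global normalization, the simplest of the methods compared, just centers the log-ratios of an array; a minimal sketch with invented values (print-tip and lowess normalization apply the same idea per print-tip group or as an intensity-dependent curve):

```python
import numpy as np

# Hypothetical log2 ratios (R/G) for genes on one array
log_ratios = np.array([0.8, 1.2, 0.9, 1.5, 1.1])

# Global median normalization: subtract the array-wide median so that
# a typical (non-differentially-expressed) gene has log-ratio 0,
# removing a constant dye-intensity bias between the two channels.
normalized = log_ratios - np.median(log_ratios)
print(np.median(normalized))  # 0.0
```

Using the median rather than the mean as the centering statistic makes the adjustment robust to the minority of genes that are genuinely differentially expressed.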


Effects of Normalization and Aggregation Methods on the Volatility of Rankings and Rank Reversals (정규화 및 통합 방법이 순위의 변동성과 순위 역전에 미치는 영향)

  • Park, Youngsun
    • Journal of Korean Society for Quality Management
    • /
    • v.41 no.4
    • /
    • pp.709-724
    • /
    • 2013
  • Purpose: The purpose of this study is to examine five evaluation models constructed by different normalization and aggregation methods in terms of the volatility of rankings and rank reversals. We also explore how the volatility of rankings of the five models changes and how often the rank reversals occur when the outliers are removed. Methods: We used data published in the Complete University Guide 2014. Two universities with missing values were excluded from the data. The university rankings were derived by using the five models, and then each model's volatility of rankings was measured. The box-plot was used to detect outliers. Results: Model 1 has the lowest volatility among the five models whether or not the outliers are included. Model 5 has the lowest number of rank reversals. Model 3, which has been used by many institutions, appears to be in the middle among the five in terms of the volatility and the rank reversals. Conclusion: The university rankings vary from one evaluation model to another depending on what normalization and aggregation methods are used. No single model exhibits clear superiority over others in both the volatility and the rank reversal. The findings of this study are expected to provide a stepping stone toward a superior model which is both reliable and robust.
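The evaluation models compared in the paper combine a normalization step with an aggregation step; one plausible instance (min-max normalization followed by a weighted sum, with invented scores and weights) can be sketched as:

```python
import numpy as np

def min_max(col):
    # Rescale one indicator column into [0, 1]
    return (col - col.min()) / (col.max() - col.min())

def rank_universities(scores, weights):
    """scores: (n_universities, n_indicators); weights sum to 1.
    Returns rank per university (1 = best composite score)."""
    norm = np.apply_along_axis(min_max, 0, scores)  # normalize each indicator
    composite = norm @ weights                       # weighted-sum aggregation
    return composite.argsort()[::-1].argsort() + 1

# Hypothetical data: 3 universities, 2 indicators, equal weights
scores = np.array([[90.0, 70.0], [80.0, 95.0], [60.0, 85.0]])
weights = np.array([0.5, 0.5])
print(rank_universities(scores, weights))  # [2 1 3]
```

Swapping min-max for z-score normalization, or the weighted sum for a geometric mean, changes the composite scores and can reverse ranks, which is exactly the sensitivity the paper measures.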

Design of the Normalization Unit for a Low-Power and Area-Efficient Turbo Decoders (저전력 및 면적 효율적인 터보 복호기를 위한 정규화 유닛 설계)

  • Moon, Je-Woo;Kim, Sik;Hwang, Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.11C
    • /
    • pp.1052-1061
    • /
    • 2003
This paper proposes a novel normalization scheme for the state metric calculation unit of the block-wise MAP Turbo decoder. The proposed scheme subtracts one of the four metrics from the state metrics in a trellis stage and, if necessary, shifts those metrics for normalization. The proposed architecture reduces power consumption and memory requirements by reducing the number of state metrics per trellis stage by one in the block-wise MAP decoder, which requires intensive state metric calculations. Simulation results show that dynamic power is reduced by 17.9% and area by 6.6% in a Turbo decoder employing the proposed normalization scheme, compared to conventional block-wise MAP Turbo decoders.
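
The core subtraction idea can be shown with a simplified sketch (here the maximum metric is chosen as the reference; the paper subtracts one selected metric of the four). After subtraction the reference metric is exactly zero, so it need not be stored, which is where the memory saving comes from:

```python
def normalize_metrics(metrics):
    # Subtract a reference metric (here: the maximum) from all state
    # metrics in a trellis stage. Path decisions depend only on metric
    # *differences*, so the decoded output is unchanged, but the values
    # stay in a bounded numeric range and one metric becomes 0.
    m = max(metrics)
    return [x - m for x in metrics]

stage = [1023.0, 1017.5, 1020.0, 1011.0]  # hypothetical state metrics
print(normalize_metrics(stage))  # [0.0, -5.5, -3.0, -12.0]
```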

A Concordance Study of the Preprocessing Orders in Microarray Data (마이크로어레이 자료의 사전 처리 순서에 따른 검색의 일치도 분석)

  • Kim, Sang-Cheol;Lee, Jae-Hwi;Kim, Byung-Soo
    • The Korean Journal of Applied Statistics
    • /
    • v.22 no.3
    • /
    • pp.585-594
    • /
    • 2009
  • Researchers performing microarray experiments must convert scanned images of raw data into data suitable for statistical analysis; this is preprocessing. Microarray preprocessing includes image filtering, imputation, and normalization. Several normalization and imputation methods have been studied, but the order of the procedures has not: there is no prior study of whether imputation or normalization should come first. This study examines the identification of differentially expressed genes (DEG) under different orderings of the preprocessing steps, using two-dye cDNA microarrays for colon cancer and gastric cancer. That is, we compare which combinations of imputation and normalization steps detect the DEG. We used two imputation methods (K-nearest neighbor, Bayesian principal component analysis) and three normalization methods (global, within-print-tip group, variance stabilization), so the preprocessing steps yield 12 combinations in total. We measured the concordance of the DEG identified from the datasets to which the 12 different preprocessing orders were applied. When the variance stabilization normalization method was applied, DEG detection was sensitive, with little variance across orderings.
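
Of the imputation methods compared, K-nearest-neighbor imputation is the simplest to illustrate; the sketch below is a toy version (distances over mutually observed columns, missing entries filled with the mean of the k nearest rows), not the paper's implementation:

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill NaN entries with the mean of the k nearest rows,
    using Euclidean distance over mutually observed columns."""
    X = X.astype(float).copy()
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        dists = []
        for j, other in enumerate(X):
            # Skip self and neighbors missing in the columns we must fill
            if j == i or np.isnan(other[miss]).any():
                continue
            shared = ~np.isnan(row) & ~np.isnan(other)
            if shared.any():
                dists.append((np.linalg.norm(row[shared] - other[shared]), j))
        nearest = [j for _, j in sorted(dists)[:k]]
        X[i, miss] = X[nearest][:, miss].mean(axis=0)
    return X

X = np.array([[1.0, 2.0], [1.1, np.nan], [5.0, 6.0]])
print(knn_impute(X, k=1))  # row 1 filled from its nearest row
```

The paper's question is then whether running such imputation before or after normalization changes which genes come out as differentially expressed.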

Relative Radiometric Normalization of Hyperion Hyperspectral Images Through Automatic Extraction of Pseudo-Invariant Features for Change Detection (자동 PIF 추출을 통한 Hyperion 초분광영상의 상대 방사정규화 - 변화탐지를 목적으로)

  • Kim, Dae-Sung;Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.26 no.2
    • /
    • pp.129-137
    • /
    • 2008
  • This study focuses on radiometric normalization, one of the preprocessing steps required to apply change detection techniques to hyperspectral images. PIFs whose radiometry remained consistent over the time interval were automatically extracted using the spectral angle, and were used as sample pixels for the linear regression of the radiometric normalization. We also addressed the question of how many PIFs to use for the linear regression, through iterative quantitative methods. The results were assessed in comparison with image regression, histogram matching, and FLAASH. In conclusion, we show that the linear regression method with PIFs produces an efficient result for radiometric normalization.
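
The two ingredients named in the abstract, spectral-angle screening of candidate PIFs and a per-band gain/offset fit, can be sketched as follows with invented pixel values (the actual thresholds and iteration scheme are the paper's):

```python
import numpy as np

def spectral_angle(a, b):
    # Angle between two spectra; a small angle across dates suggests
    # a spectrally stable pixel, i.e. a pseudo-invariant feature (PIF).
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical band values at PIF pixels: reference date vs. subject date
ref = np.array([10.0, 20.0, 30.0, 40.0])
subj = np.array([12.0, 22.0, 32.0, 42.0])

# Least-squares fit of subject -> reference: ref ~ gain * subj + offset
gain, offset = np.polyfit(subj, ref, 1)
normalized = gain * subj + offset  # subject band on the reference's radiometric scale
print(np.round(normalized, 6))
```

With the subject image mapped band by band onto the reference's radiometry, pixel differences between the two dates reflect actual change rather than illumination or sensor drift.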

A Brief Verification Study on the Normalization and Translation Invariant of Measurement Data for Seaport Efficiency : DEA Approach (항만효율성 측정 자료의 정규성과 변환 불변성 검증 소고 : DEA접근)

  • Park, Ro-Kyung;Park, Gil-Young
    • Journal of Korea Port Economic Association
    • /
    • v.23 no.2
    • /
    • pp.109-120
    • /
    • 2007
  • The purpose of this paper is to verify two properties that arise in measuring seaport DEA (data envelopment analysis) efficiency: normalization of differently scaled input and output data, and translation invariance for negative data. The main result is as follows: normalization and translation invariance of the BCC model were verified for measuring seaport efficiency, using data on 26 Korean seaports in 1995 with two inputs (berthing capacity, cargo handling capacity) and three outputs (import cargo throughput, export cargo throughput, number of ship calls). The main policy implication is that, because normalization and translation invariance of the data were verified, the port management authority should collect and publish more specific input and output data for the seaports, covering both negative values (e.g. the number of accidents at each seaport) and positive values, so that scholars can analyze efficiency.


Semi-supervised Software Defect Prediction Model Based on Tri-training

  • Meng, Fanqi;Cheng, Wenying;Wang, Jingdong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.11
    • /
    • pp.4028-4042
    • /
    • 2021
  • To address the difficulty of software defect prediction caused by insufficient labelled defect samples and unbalanced classes, a semi-supervised software defect prediction model based on the Tri-training algorithm is proposed, combining feature normalization, oversampling, and the Tri-training algorithm. First, feature normalization smooths the feature data to eliminate the influence of excessively large or small feature values on the model's classification performance. Second, oversampling expands the data, resolving the class imbalance among labelled samples. Finally, the Tri-training algorithm performs machine learning on the training samples and establishes a defect prediction model. The novelty of this model is that it effectively combines feature normalization, oversampling, and the Tri-training algorithm to solve both the scarcity of labelled samples and the class imbalance problem. Simulation experiments on the NASA software defect prediction dataset show that the proposed method outperforms four existing supervised and semi-supervised learning methods in terms of Precision, Recall, and F-Measure.
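
The first two stages of the pipeline, feature normalization and oversampling, can be sketched as follows (a toy random-duplication oversampler with invented data; the paper's exact normalization and sampling variants may differ):

```python
import numpy as np

def min_max_scale(X):
    # Feature normalization: squash each feature into [0, 1] so that
    # no single large-valued feature dominates the classifier.
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / (mx - mn)

def random_oversample(X, y, seed=0):
    # Duplicate minority-class rows at random until classes are balanced.
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for c, n in zip(classes, counts):
        if n < target:
            idx = rng.choice(np.where(y == c)[0], size=target - n)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.vstack(Xs), np.concatenate(ys)

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
y = np.array([0, 0, 0, 1])  # imbalanced: 3 non-defective, 1 defective
Xb, yb = random_oversample(min_max_scale(X), y)
print(np.bincount(yb))  # balanced classes: [3 3]
```

The balanced, scaled data would then be handed to the three Tri-training classifiers, which iteratively label the unlabelled samples for one another.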

Generative Korean Inverse Text Normalization Model Combining a Bi-LSTM Auxiliary Model (Bi-LSTM 보조 신경망 모델을 결합한 생성형 한국어 Inverse Text Normalization 모델)

  • Jeongje Jo;Dongsu Shin;Kyeongbin Jo;Youngsub Han;Byoungki Jeon
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.716-721
    • /
    • 2023
An Inverse Text Normalization (ITN) model is an important post-processing stage of a speech-to-text (STT) engine, improving the readability of STT output. Recently, deep neural networks have been applied to ITN. Most prior work using deep neural networks tags tokens at the positions in a sentence that need conversion. However, this approach suffers from out-of-vocabulary (OOV) issues and requires fine-grained token-level tagging when building training data. Moreover, prior work uses the STT output as-is, which cannot guarantee conversion quality for Korean ITN, where spacing matters. In this study, we build a generative ITN model on a BART-based generation model and combine it with a Bi-LSTM-based auxiliary neural model that supplements proper-noun handling and spacing correction of the STT output. The auxiliary network also decides whether the generative model needs to be invoked, improving average inference speed. Experiments confirm strong performance of both models on their respective quantitative metrics, demonstrating the effectiveness of the proposed combination of the two models.
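
To make the task concrete: ITN rewrites spelled-out forms in STT output into their written forms. A toy rule-based pass for two-digit Sino-Korean numerals (invented for illustration; real ITN systems, including the generative model above, cover far more patterns) looks like:

```python
# Map Sino-Korean digit syllables to values
DIGITS = {"일": 1, "이": 2, "삼": 3, "사": 4, "오": 5,
          "육": 6, "칠": 7, "팔": 8, "구": 9}

def itn_number(word):
    """Rewrite a spelled-out Sino-Korean number (up to 99) as digits;
    leave anything unrecognized unchanged."""
    if word == "십":
        return "10"
    if "십" in word:
        tens, _, ones = word.partition("십")
        value = DIGITS.get(tens, 1) * 10 + (DIGITS[ones] if ones else 0)
        return str(value)
    if word in DIGITS:
        return str(DIGITS[word])
    return word

print(itn_number("삼십오"))  # "35"
print(itn_number("십이"))    # "12"
```

Rule tables like this break down on OOV patterns and spacing variation, which is precisely the motivation for the generative, sequence-to-sequence approach proposed in the paper.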
