• Title/Summary/Keyword: Korean normalization

Analysis on Topographic Normalization Methods for 2019 Gangneung-East Sea Wildfire Area Using PlanetScope Imagery (2019 강릉-동해 산불 피해 지역에 대한 PlanetScope 영상을 이용한 지형 정규화 기법 분석)

  • Chung, Minkyung; Kim, Yongil
    • Korean Journal of Remote Sensing / v.36 no.2_1 / pp.179-197 / 2020
  • Topographic normalization reduces terrain effects on reflectance by adjusting the brightness values of image pixels so that pixels covering the same land cover have equal values. Topographic effects are induced by the imaging conditions and tend to be large in highly mountainous regions. Image analysis over mountainous terrain, such as wildfire damage assessment, therefore requires appropriate topographic normalization techniques to yield accurate results. However, most previous studies focused on evaluating topographic normalization for satellite images with moderate-to-low spatial resolution, so the alleviation of topographic effects on multi-temporal high-resolution images has not been sufficiently addressed. In this study, terrain normalization was evaluated for each band to select the optimal combination of techniques for rapid and accurate wildfire damage assessment using PlanetScope images. PlanetScope has considerable potential in the disaster management field, as it enables rapid image acquisition by providing daily 3 m resolution imagery with global coverage. For comparison, seven widely used topographic normalization methods were applied to both pre-fire and post-fire images. The analysis of the bi-temporal images suggests an optimal combination of techniques that can be applied to images with different land-cover compositions. A vegetation index was then calculated from the images after topographic normalization with the proposed method. Wildfire damage detection results, obtained by thresholding the index, showed improved detection accuracy for both object-based and pixel-based image analysis. In addition, a burn severity map was constructed to verify the effects of topographic correction on a continuous distribution of brightness values.
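
The abstract does not name the seven normalization methods compared; one method widely used in this literature is the C-correction, sketched below purely for illustration. The function and variable names are assumptions, not the paper's implementation.

```python
import numpy as np

def c_correction(band, cos_i, cos_sz):
    """Hedged sketch of the C-correction, one common topographic
    normalization method (not necessarily one of the paper's seven).

    band   : 2-D array, at-sensor reflectance for one band
    cos_i  : 2-D array, cosine of the local solar incidence angle
             (derived from a DEM and the solar geometry)
    cos_sz : scalar, cosine of the solar zenith angle
    """
    # Fit band = a + b*cos(i); the ratio c = a/b is the C parameter.
    b, a = np.polyfit(cos_i.ravel(), band.ravel(), 1)
    c = a / b
    # Rescale each pixel as if it lay on flat, sun-facing terrain.
    return band * (cos_sz + c) / (cos_i + c)
```

Because the study evaluates normalization per band, a routine like this would be run once for each band, consistent with the band-wise evaluation described above.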

An Improved Image Classification Using Batch Normalization and CNN (배치 정규화와 CNN을 이용한 개선된 영상분류 방법)

  • Ji, Myunggeun; Chun, Junchul; Kim, Namgi
    • Journal of Internet Computing and Services / v.19 no.3 / pp.35-42 / 2018
  • Deep learning is known to achieve high accuracy among the various methods for image classification. In this paper, we propose a method for enhancing the accuracy of image classification by adding a batch normalization layer to an existing deep CNN (Convolutional Neural Network). Batch normalization computes the mean and variance of each batch and shifts the activations by them, reducing the deviation in each layer. To demonstrate the superiority of the proposed method, accuracy and mAP were measured in image classification experiments on five image data sets: SHREC13, MNIST, SVHN, CIFAR-10, and CIFAR-100. Experimental results show that the CNN with batch normalization achieves better classification accuracy and mAP than the conventional CNN.
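
A minimal sketch of the batch normalization computation the abstract describes: per-batch mean and variance normalize each feature, followed by a learned scale and shift. Names are illustrative.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch normalization layer, training-mode forward pass (sketch).

    x     : (batch, features) activations
    gamma : (features,) learned scale
    beta  : (features,) learned shift
    """
    mu = x.mean(axis=0)                      # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalize each feature
    return gamma * x_hat + beta              # learned scale and shift
```

At inference time, running averages of the batch statistics collected during training replace the per-batch mean and variance.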

A study of Traditional Korean Medicine(TKM) term's Normalization for Enlarged Reference terminology model (참조용어(Reference Terminology) 모델 확장을 위한 한의학용어 정형화(Normalization) 연구)

  • Jeon, Byoung-Uk; Hong, Seong-Cheon
    • Journal of the Korean Institute of Oriental Medical Informatics / v.15 no.2 / pp.1-6 / 2009
  • The discipline of terminology is based on its own theoretical principles and consists primarily of the following activities: analysing the concepts and concept structures used in a field or domain of activity; identifying the terms assigned to those concepts; in bilingual or multilingual terminology, establishing correspondences between terms in the various languages; and creating new terms as required. Word properties include syntax, morphology, and orthography: syntax concerns how words are put together, morphology consists of inflection, derivation, and compounding, and orthography is spelling. TKM (Traditional Korean Medicine) terms, in contrast, have two important elements: visual form and phonetic notation. Visual form covers spelling, word order, stop words, and so on; for example, word-order variation means that '다한', '한다', '多汗', and '汗多' should be treated as the same term. Phonetic notation covers palatalization, the initial sound law, and so on; for example, palatalization means that '수족랭', '수족냉', and '手足冷' should be treated as the same term. Normalizing terms in this way is therefore a method for enlarging the reference terminology model, and for this reason normalization of TKM terms is necessary.
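
A minimal sketch of the normalization the abstract illustrates: collapsing word-order and palatalization variants onto a canonical form. The variant table is hypothetical, built only from the abstract's examples, not from the paper's terminology data.

```python
# Hedged sketch: map TKM term variants onto one canonical form.
# The table covers only the abstract's examples and is illustrative.
VARIANTS = {
    "한다": "다한",      # word-order variant (Hangul)
    "汗多": "多汗",      # word-order variant (Hanja)
    "수족냉": "수족랭",  # palatalization / initial-sound-law variant
}

def normalize_term(term: str) -> str:
    """Return the canonical form of a term, if a variant mapping exists."""
    return VARIANTS.get(term, term)

assert normalize_term("汗多") == "多汗"
assert normalize_term("수족냉") == normalize_term("수족랭")
```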

Comparison of Normalizations for cDNA Microarray Data

  • Kim, Yun-Hui; Kim, Ho; Park, Ung-Yang; Seo, Jin-Yeong; Jeong, Jin-Ho
    • Proceedings of the Korean Statistical Society Conference / 2002.05a / pp.175-181 / 2002
  • cDNA microarray experiments allow us to investigate the expression levels of thousands of genes simultaneously and make it easy to compare gene expression across different populations. However, researchers must be cautious in interpreting the results because of unexpected sources of variation, such as systematic errors from the microarrayer and differences in cDNA dye intensity. Moreover, the scanner itself calculates both the mean and the median of the signal and background pixels, so a choice must be made as to which raw data will be used in the analysis. In this paper, we compare the results of using the mean versus the median of the raw data, and of different normalization methods for reducing systematic errors, using arm skin cells from old and young males. The median is preferable to the mean because the distribution of the test statistic (t-statistic) computed from the median is closer to the normal distribution than that computed from the mean. Scaled print-tip normalization performs better than global or lowess normalization, judging by the distribution of the test statistic.
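
A minimal sketch of two of the normalization strategies the abstract compares, applied to log-ratios M = log2(R/G): global median centering versus scaled print-tip normalization (per-group centering plus scaling to a common MAD). The array layout and the geometric-mean scaling target are assumptions of this sketch, not the paper's exact procedure.

```python
import numpy as np

def global_normalize(M):
    """Global normalization: center all log-ratios on one common median."""
    return M - np.median(M)

def scaled_print_tip_normalize(M, tip_group):
    """Scaled print-tip normalization (sketch): center each print-tip
    group on its own median, then scale every group to a common MAD."""
    M = M.copy()
    groups = np.unique(tip_group)
    mads = {}
    for g in groups:
        idx = tip_group == g
        M[idx] -= np.median(M[idx])            # per-tip median centering
        mads[g] = np.median(np.abs(M[idx]))    # per-tip spread (MAD)
    target = np.exp(np.mean(np.log([mads[g] for g in groups])))
    for g in groups:
        idx = tip_group == g
        M[idx] *= target / mads[g]             # equalize spread across tips
    return M
```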

Effects of Normalization and Aggregation Methods on the Volatility of Rankings and Rank Reversals (정규화 및 통합 방법이 순위의 변동성과 순위 역전에 미치는 영향)

  • Park, Youngsun
    • Journal of Korean Society for Quality Management / v.41 no.4 / pp.709-724 / 2013
  • Purpose: The purpose of this study is to examine five evaluation models, constructed with different normalization and aggregation methods, in terms of the volatility of their rankings and the occurrence of rank reversals. We also explore how the volatility of the five models' rankings changes, and how often rank reversals occur, when outliers are removed. Methods: We used data published in the Complete University Guide 2014; two universities with missing values were excluded. University rankings were derived using the five models, and each model's volatility of rankings was measured. Box plots were used to detect outliers. Results: Model 1 has the lowest volatility among the five models whether or not outliers are included. Model 5 has the fewest rank reversals. Model 3, which has been used by many institutions, falls in the middle of the five in terms of both volatility and rank reversals. Conclusion: University rankings vary from one evaluation model to another depending on the normalization and aggregation methods used. No single model exhibits clear superiority over the others in both volatility and rank reversals. The findings of this study are expected to provide a stepping stone toward a model that is both reliable and robust.
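
The abstract does not specify the five models; a representative building block is min-max normalization with weighted-sum aggregation. The sketch below, on made-up indicator data and weights, shows how removing an outlier changes the normalization ranges and reverses the ranks of two other universities, which is the kind of rank reversal the study counts.

```python
import numpy as np

def minmax(col):
    return (col - col.min()) / (col.max() - col.min())

def scores(X, w):
    """Weighted sum of column-wise min-max normalized indicators."""
    return (np.apply_along_axis(minmax, 0, X) * w).sum(axis=1)

# Hypothetical indicators (rows: universities A-D; higher is better).
X = np.array([[1.0,  5.0],
              [2.5,  4.2],
              [3.0,  4.4],
              [2.0, 20.0]])    # D is an outlier on the second indicator
w = np.array([0.5, 0.5])

print(scores(X, w))       # with D included, B scores above A
print(scores(X[:3], w))   # drop D: A scores above B -- a rank reversal
```

Removing D shrinks the second indicator's range from 15.8 to 0.8, so differences on that indicator are amplified enough to flip the order of A and B.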

Design of the Normalization Unit for a Low-Power and Area-Efficient Turbo Decoders (저전력 및 면적 효율적인 터보 복호기를 위한 정규화 유닛 설계)

  • Moon, Je-Woo; Kim, Sik; Hwang, Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.11C / pp.1052-1061 / 2003
  • This paper proposes a novel normalization scheme for the state metric calculation unit of a block-wise MAP Turbo decoder. The proposed scheme subtracts one of the four metrics from the state metrics in a trellis stage and, if necessary, shifts those metrics for normalization. The proposed architecture reduces power consumption and memory requirements by reducing the number of state metrics per trellis stage by one in the block-wise MAP decoder, which requires intensive state metric calculations. Simulation results show that dynamic power is reduced by 17.9% and area by 6.6% in a Turbo decoder employing the proposed normalization scheme, compared to conventional block-wise MAP Turbo decoders.
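
A minimal sketch of state metric normalization in its generic form: subtracting a common reference metric each trellis stage keeps stored values bounded without changing decoding decisions, and the reference metric (now zero) no longer needs to be stored. This illustrates the general technique only; the paper's specific four-metric selection and shifting logic are not reproduced here.

```python
import numpy as np

def normalize_state_metrics(metrics, ref_index=0):
    """Subtract one designated state metric from all metrics in a stage.

    MAP/Viterbi recursions depend only on metric differences, so this
    leaves decisions unchanged while preventing overflow; the reference
    entry becomes 0 and need not be stored, saving memory.
    """
    return metrics - metrics[ref_index]

stage = np.array([123.0, 117.5, 130.25, 121.0])  # illustrative metrics
print(normalize_state_metrics(stage))            # [ 0.  -5.5  7.25 -2. ]
```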

A Concordance Study of the Preprocessing Orders in Microarray Data (마이크로어레이 자료의 사전 처리 순서에 따른 검색의 일치도 분석)

  • Kim, Sang-Cheol; Lee, Jae-Hwi; Kim, Byung-Soo
    • The Korean Journal of Applied Statistics / v.22 no.3 / pp.585-594 / 2009
  • Researchers in microarray experiments convert processed images of raw data into data suitable for statistical analysis; this step is called preprocessing. Preprocessing of microarray data includes image filtering, imputation, and normalization. Several different methods of normalization and imputation have been studied, but the order in which the two procedures are applied has not, so it is unclear whether imputation or normalization should come first. This study examines the identification of differentially expressed genes (DEGs) under different orders of the preprocessing steps, using two-dye cDNA microarrays from colon cancer and gastric cancer. That is, we compare which combinations of imputation and normalization steps detect the DEGs. We used two imputation methods (K-nearest neighbor and Bayesian principal component analysis) and three normalization methods (global, within-print-tip group, and variance stabilization), giving 12 preprocessing combinations in all. We measured the concordance of the DEGs identified from the datasets to which the 12 different preprocessing orders were applied. When variance stabilization was used for normalization, DEG detection was comparatively insensitive to the preprocessing order.
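
A minimal sketch of the order comparison the abstract describes: run KNN imputation and a global (median-centering) normalization in both orders and measure the concordance of the resulting top gene lists. The synthetic data, the toy DEG criterion, and scikit-learn's KNNImputer standing in for the paper's implementations are all assumptions.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
expr = rng.normal(size=(500, 10))               # genes x arrays (illustrative)
expr[rng.random(expr.shape) < 0.05] = np.nan    # 5% missing spots

def normalize(x):
    """Global normalization: center each array on its median."""
    return x - np.nanmedian(x, axis=0, keepdims=True)

impute = KNNImputer(n_neighbors=5).fit_transform

order_a = normalize(impute(expr))   # impute first, then normalize
order_b = impute(normalize(expr))   # normalize first, then impute

def top_genes(x, k=50):
    """Toy DEG proxy: genes with the largest mean absolute value."""
    return set(np.argsort(-np.abs(x).mean(axis=1))[:k])

shared = top_genes(order_a) & top_genes(order_b)
print(f"concordance of top gene lists: {len(shared) / 50:.2f}")
```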

Relative Radiometric Normalization of Hyperion Hyperspectral Images Through Automatic Extraction of Pseudo-Invariant Features for Change Detection (자동 PIF 추출을 통한 Hyperion 초분광영상의 상대 방사정규화 - 변화탐지를 목적으로)

  • Kim, Dae-Sung; Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.26 no.2 / pp.129-137 / 2008
  • This study focuses on radiometric normalization, one of the pre-processing steps required before applying change detection techniques to hyperspectral images. PIFs (pseudo-invariant features) with radiometric consistency over the time interval were extracted automatically using the spectral angle, and used as sample pixels for the linear regression of the radiometric normalization. We also addressed the question of how many PIFs to use for the linear regression, with iterative quantitative methods. The results were assessed in comparison with image regression, histogram matching, and FLAASH. In conclusion, we show that the linear regression method with PIFs produces efficient radiometric normalization results.
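
A minimal sketch of the pipeline the abstract outlines: select PIF candidates by the spectral angle between the two acquisition dates, then fit a per-band linear regression over the PIF pixels to normalize the subject image to the reference. The fixed angle threshold and the names are illustrative assumptions (the paper selects the number of PIFs iteratively).

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between corresponding pixel spectra.
    a, b : (pixels, bands) arrays from the two dates."""
    cos = (a * b).sum(axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def relative_normalize(subject, reference, angle_thresh=0.05):
    """Relative radiometric normalization via PIFs (sketch)."""
    pif = spectral_angle(subject, reference) < angle_thresh  # stable pixels
    out = np.empty_like(subject)
    for band in range(subject.shape[1]):
        # Per-band gain/offset from PIFs: reference ~ gain*subject + offset
        gain, offset = np.polyfit(subject[pif, band], reference[pif, band], 1)
        out[:, band] = gain * subject[:, band] + offset
    return out
```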

A Brief Verification Study on the Normalization and Translation Invariant of Measurement Data for Seaport Efficiency : DEA Approach (항만효율성 측정 자료의 정규성과 변환 불변성 검증 소고 : DEA접근)

  • Park, Ro-Kyung; Park, Gil-Young
    • Journal of Korea Port Economic Association / v.23 no.2 / pp.109-120 / 2007
  • The purpose of this paper is to verify two properties that arise in measuring seaport efficiency with DEA (data envelopment analysis): normalization of the different input and output data, and translation invariance for negative data. The main result is as follows: normalization and translation invariance in the BCC model were verified using data on 26 Korean seaports in 1995, with two inputs (berthing capacity, cargo handling capacity) and three outputs (import cargo throughput, export cargo throughput, number of ship calls). The main policy implication is that the port management authority should collect more specific input and output data for each seaport, covering negative values (e.g., number of accidents per seaport) as well as positive ones, and publish these data so that scholars can analyze efficiency, since normalization and translation invariance of the data were verified.
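
A minimal sketch of the translation-invariance check the abstract describes, relying on the standard result that the input-oriented BCC model is invariant to translating outputs: efficiency scores are computed with scipy.optimize.linprog before and after adding a constant to the outputs. The data below are made up, not the 26-port 1995 data.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(X, Y):
    """Input-oriented BCC efficiencies, envelopment form.
    X : (m inputs, n DMUs), Y : (s outputs, n DMUs)."""
    (m, n), s = X.shape, Y.shape[0]
    scores = []
    for j0 in range(n):
        c = np.r_[1.0, np.zeros(n)]                   # minimize theta
        A_in = np.hstack([-X[:, [j0]], X])            # X@lam <= theta*x0
        A_out = np.hstack([np.zeros((s, 1)), -Y])     # Y@lam >= y0
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # sum(lam) = 1 (VRS)
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                      A_eq=A_eq, b_eq=[1.0],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)

X = np.array([[4.0, 2.0, 3.0, 5.0]])        # one input, four ports (made up)
Y = np.array([[2.0, 1.5, 3.0, 2.5]])        # one output
print(bcc_input_efficiency(X, Y))
print(bcc_input_efficiency(X, Y + 100.0))   # shifted outputs: same scores
```

Because the lambda weights sum to one under variable returns to scale, adding a constant to every output cancels on both sides of the output constraints, which is why the two score vectors match.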

Semi-supervised Software Defect Prediction Model Based on Tri-training

  • Meng, Fanqi; Cheng, Wenying; Wang, Jingdong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.4028-4042 / 2021
  • To address the difficulty of software defect prediction caused by insufficient labelled defect samples and class imbalance, a semi-supervised software defect prediction model is proposed that combines feature normalization, over-sampling, and the Tri-training algorithm. First, feature normalization smooths the feature data, eliminating the influence of very large or very small feature values on the model's classification performance. Second, over-sampling expands the data, resolving the class imbalance of the labelled samples. Finally, the Tri-training algorithm learns from the training samples and establishes the defect prediction model. The novelty of this model is that it effectively combines feature normalization, over-sampling, and Tri-training to solve both the under-labelled-sample and class-imbalance problems. Simulation experiments on the NASA software defect prediction dataset show that the proposed method outperforms four existing supervised and semi-supervised learning methods in terms of Precision, Recall, and F-Measure.
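
A minimal sketch of the described pipeline: min-max feature normalization, simple random over-sampling of the minority (defective) class, and one simplified pseudo-labeling round of Tri-training in which two agreeing classifiers teach the third. The decision-tree base learners, the single round, and the synthetic data standing in for the NASA sets are all assumptions.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def oversample(X, y):
    """Random over-sampling: duplicate minority samples until balanced."""
    minority = np.flatnonzero(y == 1)
    extra = rng.choice(minority, size=(y == 0).sum() - minority.size)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx]

def tri_training(X_l, y_l, X_u):
    """Three trees on bootstrap samples, then one pseudo-labeling round."""
    clfs = []
    for seed in range(3):
        boot = rng.integers(0, len(y_l), len(y_l))     # bootstrap indices
        clfs.append(DecisionTreeClassifier(random_state=seed)
                    .fit(X_l[boot], y_l[boot]))
    preds = np.array([c.predict(X_u) for c in clfs])   # (3, n_unlabeled)
    for i in range(3):
        j, k = [m for m in range(3) if m != i]
        agree = preds[j] == preds[k]                   # the two peers agree
        if agree.any():                                # teach classifier i
            clfs[i].fit(np.vstack([X_l, X_u[agree]]),
                        np.concatenate([y_l, preds[j][agree]]))
    return clfs

def predict(clfs, X):
    """Majority vote over the three classifiers."""
    votes = np.array([c.predict(X) for c in clfs])
    return (votes.sum(axis=0) >= 2).astype(int)

# Synthetic stand-in for a NASA defect data set (about 20% defective).
X = rng.normal(size=(300, 5))
y = (rng.random(300) < 0.2).astype(int)
X = MinMaxScaler().fit_transform(X)         # feature normalization
X_l, y_l, X_u = X[:100], y[:100], X[100:]   # 100 labelled, 200 unlabelled
X_l, y_l = oversample(X_l, y_l)             # balance the labelled classes
print(predict(tri_training(X_l, y_l, X_u), X[:10]))
```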