• Title/Summary/Keyword: Korean normalization


Semantic Segmentation of Drone Imagery Using Deep Learning for Seagrass Habitat Monitoring (잘피 서식지 모니터링을 위한 딥러닝 기반의 드론 영상 의미론적 분할)

  • Jeon, Eui-Ik;Kim, Seong-Hak;Kim, Byoung-Sub;Park, Kyung-Hyun;Choi, Ock-In
    • Korean Journal of Remote Sensing / v.36 no.2_1 / pp.199-215 / 2020
  • Seagrass, a marine vascular plant, plays an important role in the marine ecosystem, so seagrass habitats are monitored periodically. Recently, drones, which can easily acquire very high-resolution imagery, have been increasingly used to monitor seagrass habitats efficiently, and deep learning based on convolutional neural networks has shown excellent performance in semantic segmentation, so studies applying deep learning models have been actively conducted in remote sensing. However, segmentation accuracy varies with the hyperparameters, the deep learning model, and the imagery, and the normalization of the imagery and the tile and batch sizes are not standardized. In this study, therefore, seagrass habitats were segmented from drone-borne imagery using a deep learning model with excellent performance, and the results were compared and analyzed with a focus on normalization and tile size. For the comparison, a grayscale image and grayscale imagery converted by the Z-score and Min-Max normalization methods were used; the tile size was increased at a specific interval while the batch size was set as large as the available memory allowed. As a result, the IoU of the Z-score normalized imagery was 0.26 to 0.4 higher than that of the other imagery, and differences of up to 0.09 were found depending on the tile and batch sizes. Since the results differed according to the normalization method and the tile and batch sizes, the experiment shows that these factors require a suitable decision process.
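
The comparison above hinges on two standard normalization schemes applied to tiled imagery. As a minimal sketch of that preprocessing step (the paper's exact pipeline is not given here; the function names, the 256-pixel tile size, and the random stand-in scene are illustrative only):

```python
import numpy as np

def zscore_normalize(img: np.ndarray) -> np.ndarray:
    """Z-score normalization: zero mean, unit variance per band."""
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True)
    return (img - mean) / (std + 1e-8)

def minmax_normalize(img: np.ndarray) -> np.ndarray:
    """Min-Max normalization: rescale each band to [0, 1]."""
    lo = img.min(axis=(0, 1), keepdims=True)
    hi = img.max(axis=(0, 1), keepdims=True)
    return (img - lo) / (hi - lo + 1e-8)

def tiles(img: np.ndarray, size: int):
    """Cut an image into non-overlapping size x size tiles (edges dropped)."""
    h, w = img.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield img[y:y + size, x:x + size]

# A random stand-in for a drone scene (values are hypothetical).
scene = np.random.randint(0, 256, (1024, 1024, 3)).astype(np.float32)
batch = [zscore_normalize(t) for t in tiles(scene, 256)]
print(len(batch), batch[0].shape)  # 16 (256, 256, 3)
```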

A Brief Verification Study on the Normalization and Translation Invariant of Measurement Data for Seaport Efficiency;DEA Approach (항만효율성 측정 자료의 정규성과 변환 불변성 검증소고;DEA접근)

  • Park, Ro-Kyung
    • Proceedings of the Korea Port Economic Association Conference / 2007.07a / pp.391-405 / 2007
  • The purpose of this paper is to verify two problems that arise in measuring seaport DEA (data envelopment analysis) efficiency: normalization of heterogeneous input and output data, and translation invariance for negative data. The main result is as follows: normalization and translation invariance in the BCC model were verified by measuring the efficiency of 26 Korean seaports using 1995 data with two inputs (berthing capacity, cargo handling capacity) and three outputs (import cargo throughput, export cargo throughput, number of ship calls). The main policy implication is that, because normalization and translation invariance of the data were verified, the port management authority should collect more specific input and output data for the seaports, including both negative values (e.g., the number of accidents in each seaport) and positive values, and publish these data so that scholars can analyze efficiency.
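
For context on what is being verified: in the input-oriented BCC model, efficiency scores are unaffected by a change of measurement units, and the convexity constraint makes them invariant to a translation of the outputs. Below is a minimal numerical sketch with synthetic data standing in for the 1995 port figures (the function and variable names are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, j0):
    """Input-oriented BCC (variable returns to scale) efficiency of unit j0.

    X: (m, n) input matrix, Y: (s, n) output matrix for n units.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, j0],
            Y @ lam >= Y[:, j0],  sum(lam) == 1,  lam >= 0.
    Decision vector: [theta, lam_1, ..., lam_n].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    A_in = np.hstack([-X[:, [j0]], X])           # X @ lam - theta*x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y @ lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    A_eq = np.concatenate([[0.0], np.ones(n)]).reshape(1, -1)  # sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Synthetic stand-ins for 26 ports with 2 inputs and 3 outputs.
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, (2, 26))
Y = rng.uniform(1, 10, (3, 26))

base = [bcc_efficiency(X, Y, j) for j in range(26)]
shifted = [bcc_efficiency(X, Y + 5.0, j) for j in range(26)]       # translate outputs
rescaled = [bcc_efficiency(X * [[1000.0], [0.01]], Y, j) for j in range(26)]  # change units
print(np.allclose(base, shifted), np.allclose(base, rescaled))     # True True
```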


MAD-Based Relative Radiometric Normalization of Multi-temporal KOMPSAT-2 Multispectral Imagery for Change Detection (변화지역 탐지를 위한 시계열 KOMPSAT-2 다중분광 영상의 MAD 기반 상대복사 보정에 관한 연구)

  • Yeon, Jong-Min;Kim, Hyun-Ok;Yoon, Bo-Yeol
    • Journal of the Korean Association of Geographic Information Studies / v.15 no.3 / pp.66-80 / 2012
  • It is necessary to normalize spectral image values derived from multi-temporal satellite data to a common scale in order to apply remote sensing methods for change detection, disaster mapping, crop monitoring, etc. There are two main approaches: absolute radiometric normalization and relative radiometric normalization. This study focuses on multi-temporal satellite image processing using relative radiometric normalization. Three scenes of KOMPSAT-2 imagery were processed using the Multivariate Alteration Detection (MAD) method, which has the particular advantage of selecting PIFs (Pseudo-Invariant Features) automatically by canonical correlation analysis. The scenes were then applied to detect disaster areas over Sendai, Japan, which was hit by a tsunami on 11 March 2011. The case study showed that the automatic extraction of changed areas after the tsunami, using satellite data relatively normalized via the MAD method, achieved a high level of accuracy. In addition, the relative normalization of multi-temporal satellite imagery produced better results for rapidly mapping disaster-affected areas with an increased confidence level.
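
As a rough sketch of the MAD workflow the abstract describes, assuming two co-registered scenes: canonical correlation analysis pairs the two acquisitions, the MAD variates are the differences of the canonical variates, a chi-square statistic flags likely no-change pixels as PIFs, and a per-band linear regression over those pixels maps the subject scene onto the reference. Function names and the 0.95 threshold are illustrative; production implementations typically use the iteratively reweighted variant (IR-MAD):

```python
import numpy as np
from scipy.stats import chi2
from sklearn.cross_decomposition import CCA

def mad_pif_mask(img_t1, img_t2, keep=0.95):
    """Select pseudo-invariant (no-change) pixels with the MAD transform.

    img_t1, img_t2: (H, W, B) co-registered multispectral scenes.
    Returns a boolean (H, W) mask of likely unchanged pixels.
    """
    H, W, B = img_t1.shape
    X = img_t1.reshape(-1, B).astype(float)
    Y = img_t2.reshape(-1, B).astype(float)
    # CCA finds maximally correlated projections of the two acquisitions.
    U, V = CCA(n_components=B).fit_transform(X, Y)
    mad = U - V                              # MAD variates
    z = (mad / mad.std(axis=0)) ** 2         # standardized, squared
    chi = z.sum(axis=1)                      # ~ chi-square(B) for no-change pixels
    return (chi < chi2.ppf(keep, df=B)).reshape(H, W)

def relative_normalization(img_ref, img_sub, mask):
    """Per-band linear regression of the subject scene onto the reference,
    fitted over PIF pixels only, then applied to the whole subject scene."""
    out = np.empty_like(img_sub, dtype=float)
    for b in range(img_sub.shape[2]):
        x = img_sub[..., b][mask]
        y = img_ref[..., b][mask]
        slope, intercept = np.polyfit(x, y, 1)
        out[..., b] = slope * img_sub[..., b] + intercept
    return out
```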

Evaluation of Image for Phantom according to Normalization, Well Counter Correction in PET-CT (PET-CT Normalization, Well Counter Correction에 따른 팬텀을 이용한 영상 평가)

  • Choong-Woon Lee;Yeon-Wook You;Jong-Woon Mun;Yun-Cheol Kim
    • The Korean Journal of Nuclear Medicine Technology / v.27 no.1 / pp.47-54 / 2023
  • Purpose: PET-CT imaging requires an appropriate quality assurance system to achieve high efficiency and reliability, and quality control is essential for improving the quality of care and patient safety. Currently, the performance evaluation methods NU 2-1994 and NU 2-2001 proposed by NEMA and the IEC are available for PET-CT image evaluation. In this study, we compared phantom images from identical experiments before and after PET-CT 3D normalization and well counter correction, and evaluated the usefulness of these quality control procedures. Materials and methods: Discovery 690 (General Electric Healthcare, USA) PET-CT equipment was used to perform 3D normalization and well counter correction as recommended by GE Healthcare. Based on the recovery coefficients for the six spheres of the NEMA IEC Body Phantom recommended by EARL, 20 kBq/㎖ of 18F was injected into the spheres of the phantom and 2 kBq/㎖ into the body of the phantom, and the PET-CT scan was performed with a radioactivity ratio of 10:1. Images were reconstructed with the TOF+PSF, TOF, OSEM+PSF, and OSEM algorithms and Gaussian filters of 4.0, 4.5, 5.0, 5.5, 6.0, and 6.5 mm, with matrix size 128×128, slice thickness 3.75 mm, 2 iterations, and 16 subsets. The PET images were attenuation-corrected using the CT images and analyzed with the software program AW 4.7 (General Electric Healthcare, USA). ROIs were set to fit the 6 spheres in the CT image, and the RC (recovery coefficient) was measured after fusion of PET and CT. Statistical analysis was performed with the Wilcoxon signed-rank test using R. Results: Overall, the recovery coefficients of the phantom images increased after the quality control items were performed. The recovery coefficient by reconstruction increased in the order TOF+PSF, TOF, OSEM+PSF; before and after quality control, RCmax increased by 0.13 for OSEM, 0.16 for OSEM+PSF, 0.16 for TOF, and 0.15 for TOF+PSF, and RCmean increased by 0.09 for OSEM, 0.09 for OSEM+PSF, 0.106 for TOF, and 0.10 for TOF+PSF. The two groups showed a statistically significant difference in the Wilcoxon signed-rank test (P < 0.001). Conclusion: PET-CT systems require quality assurance to achieve high efficiency and reliability, and standardized intervals and procedures should be followed for quality control. We hope that this study provides a good opportunity to consider the importance of quality control in PET-CT.
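
The RCmax and RCmean values quoted above follow the usual definition of a recovery coefficient: the measured activity concentration in a sphere ROI divided by the true injected concentration. A minimal sketch, with synthetic voxel values standing in for a sphere ROI (the helper name and the numbers are illustrative):

```python
import numpy as np

def recovery_coefficients(roi_values, true_activity):
    """Recovery coefficients for one sphere ROI.

    roi_values: voxel values (kBq/ml) inside the sphere ROI.
    true_activity: injected activity concentration (kBq/ml), e.g. 20.
    RCmax uses the hottest voxel; RCmean uses the ROI average.
    """
    roi = np.asarray(roi_values, dtype=float)
    return {
        "RCmax": roi.max() / true_activity,
        "RCmean": roi.mean() / true_activity,
    }

# Hypothetical ROI from a sphere filled at 20 kBq/ml.
rng = np.random.default_rng(1)
roi = rng.normal(loc=17.0, scale=1.5, size=500)   # synthetic voxel values
print(recovery_coefficients(roi, true_activity=20.0))
# e.g. {'RCmax': ~1.0, 'RCmean': ~0.85}
```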


A Study on Comparison of Normalization and Weighting Method for Constructing Index about Flood (홍수관련 지표 산정을 위한 표준화 및 가중치 비교 연구)

  • Baeck, Seung-Hyub;Choi, Si-Jung;Hong, Seung-Jin;Kim, Dong-Phil
    • Journal of Wetlands Research / v.13 no.3 / pp.411-426 / 2011
  • To construct a composite indicator, the underlying variables must be normalized and weighted to make them comparable and evaluable, yet the field lacks a distinct methodology and it is common to simply apply whatever method is universally popular. In most research, indices are not built by comparing and analyzing various normalization and weighting schemes; instead, the constructor applies a chosen method to develop the indicators and indices. In this study, various normalization and weighting methods were applied to the indices, their impact on the index and the individual characteristics of each method were analyzed, and a more reasonable approach was derived to help future research. Five normalization methods and four weighting schemes were compared and analyzed. The results differed depending on the normalization method applied, and the Z-score method best reflected the characteristics of the variables. Across the weighting methods, the calculated results showed little difference and the rankings of the indices did not change significantly. Based on these results, it would be better to provide constructors with a set of normalization and weighting methods that reflect the characteristics of their variables when building flood indices.
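
To make the comparison concrete, here is a minimal sketch of building one composite index under interchangeable normalization and weighting choices. The abstract does not name the five methods compared, so the normalizers and weights below are common choices used purely for illustration:

```python
import numpy as np

def zscore(x):
    return (x - x.mean()) / x.std()

def minmax(x):
    return (x - x.min()) / (x.max() - x.min())

def rank_norm(x):
    return x.argsort().argsort() / (len(x) - 1)

def composite_index(indicators, weights, normalize):
    """indicators: (n_regions, n_vars) array; weights should sum to 1."""
    normed = np.column_stack([normalize(col) for col in indicators.T])
    return normed @ np.asarray(weights)

rng = np.random.default_rng(2)
data = rng.uniform(size=(10, 3))     # 10 regions, 3 flood-related variables
w = [0.5, 0.3, 0.2]                  # hypothetical expert weights
for f in (zscore, minmax, rank_norm):
    idx = composite_index(data, w, normalize=f)
    print(f.__name__, np.argsort(-idx))   # region ranking under each scheme
```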

Normalization of Face Images Subject to Directional Illumination using Linear Model (선형모델을 이용한 방향성 조명하의 얼굴영상 정규화)

  • 고재필;김은주;변혜란
    • Journal of KIISE: Software and Applications / v.31 no.1 / pp.54-60 / 2004
  • Face recognition is one of the problems addressed by appearance-based matching techniques. However, the appearance of a face image is very sensitive to variation in illumination. One of the easiest ways to achieve better performance is to collect more training samples acquired under variable lighting, but this is not practical in the real world. In object recognition, it is desirable to focus on feature extraction or normalization techniques rather than on the classifier. This paper presents a simple approach to the normalization of faces subject to directional illumination, one of the significant sources of error in the face recognition process. The proposed method, ICR (Illumination Compensation based on multiple linear Regression), finds the plane that best fits the intensity distribution of the face image using multiple linear regression, then uses this plane to normalize the face image. The advantages of our method are its simplicity and practicality: the planar approximation of a face image is mathematically defined by a simple linear model. We provide experimental results on public face databases and our own database to demonstrate the performance of the proposed ICR method. The experimental results show a significant improvement in recognition accuracy.
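
The ICR idea lends itself to a few lines of code: fit a plane to the pixel intensities by least squares and remove it. A minimal sketch, assuming grayscale input and assuming the compensation is subtractive (the paper may apply the fitted plane differently); the names and the toy gradient are illustrative:

```python
import numpy as np

def icr_normalize(face: np.ndarray) -> np.ndarray:
    """Fit a plane z = a*x + b*y + c to the image intensities by multiple
    linear regression, then subtract it to compensate directional lighting.

    face: (H, W) grayscale image. Returns the compensated image, recentered
    on the original mean so overall brightness is preserved.
    """
    H, W = face.shape
    ys, xs = np.mgrid[0:H, 0:W]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
    coef, *_ = np.linalg.lstsq(A, face.ravel().astype(float), rcond=None)
    plane = (A @ coef).reshape(H, W)
    return face - plane + face.mean()

# Toy example: a flat face patch plus a left-to-right lighting gradient.
base = np.full((64, 64), 120.0)
gradient = np.linspace(-40, 40, 64)[None, :]      # directional illumination
normalized = icr_normalize(base + gradient)
print(normalized.std())   # ~0: the planar lighting component is removed
```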

A Corpus-based Study of Translation Universals in English Translations of Korean Newspaper Texts (한국 신문의 영어 번역에 나타난 번역 보편소의 코퍼스 기반 분석)

  • Goh, Gwang-Yoon;Lee, Younghee (Cheri)
    • Cross-Cultural Studies / v.45 / pp.109-143 / 2016
  • This article examines distinctive linguistic shifts in translational English in an effort to verify the validity of the translation universals hypotheses, including simplification, explicitation, normalization, and leveling-out, which have been the most heavily explored to date. A large-scale study involving comparable corpora of translated and non-translated English newspaper texts was carried out to typify particular linguistic attributes inherent in translated texts. The main findings are as follows. First, by employing the parameters of STTR, top-to-bottom frequency words, and mean sentence lengths, translational instances of simplification were detected across the translated English newspaper corpora. In contrast, the proportion of function words produced contrary results, which in turn suggests that this feature might not constitute an effective test of the hypothesis. Second, it was found that the use of connectives was more salient in original English newspaper texts than in translated English texts, which is incompatible with the explicitation hypothesis. Third, as an indicator of translational normalization, lexical bundles were found to be more pervasive in translated texts than in non-translated texts, which is expected from and therefore supports the normalization hypothesis. Finally, the standard deviations of both STTR and mean sentence length turned out to be higher in translated texts, indicating that the translated English newspaper texts were less leveled out within the same corpus group, contrary to what the leveling-out hypothesis postulates. Overall, the results suggest that not all four hypotheses may qualify for the label of translation universals, or at least that some translational predictors are not feasible enough to evaluate the effectiveness of the translation universals hypotheses.
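
Two of the simplification parameters named above, STTR and mean sentence length, have standard textbook definitions. A minimal sketch, with naive tokenization and hypothetical file names:

```python
import re

def sttr(tokens, window=1000):
    """Standardized type-token ratio: the mean TTR over consecutive
    fixed-size windows, which removes raw TTR's dependence on text length."""
    chunks = [tokens[i:i + window]
              for i in range(0, len(tokens) - window + 1, window)]
    if not chunks:                       # text shorter than one window
        return len(set(tokens)) / len(tokens)
    return sum(len(set(c)) / window for c in chunks) / len(chunks)

def mean_sentence_length(text):
    """Average number of word tokens per sentence (naive splitting)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

# Usage on hypothetical corpus files (names are illustrative):
# tokens = open("translated.txt", encoding="utf-8").read().lower().split()
# print(sttr(tokens))
# print(mean_sentence_length(open("original.txt", encoding="utf-8").read()))
```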