• Title/Abstract/Keyword: Data Comparison

Search results: 12,537 items (processing time: 0.036 s)

Comparison of Topex/Poseidon sea surface heights and Tide Gauge sea levels in the South Indian Ocean

  • Yoon, Hong-Joo
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 1998년도 Proceedings of International Symposium on Remote Sensing
    • /
    • pp.70-75
    • /
    • 1998
  • The comparison of Topex/Poseidon sea surface heights and Tide Gauge sea levels was studied in the South Indian Ocean over about 3 years of the Topex/Poseidon mission (cycles 11-121), from January 1993 through December 1995. The AVISO user's handbook was followed for processing the sea surface height data. Topex/Poseidon sea surface heights ($\zeta^{T/P}$), taken from the satellite data points closest to each Tide Gauge station, were chosen at the same latitude as the station. These data were re-sampled by linear interpolation at an interval of about 10 days and filtered with a Gaussian filter with a 60-day window. Tide Gauge sea levels ($\zeta^{Argos}$, $\zeta^{In-situ}$ and $\zeta^{Model}$) were treated with the same method as the satellite data. The main conclusions obtained from the root-mean-square differences and correlation coefficients were as follows: 1) for producing Tide Gauge sea levels from bottom pressure, the in-situ data of METEO-FRANCE gave much better values than the model data of ECMWF, and 2) when comparing Topex/Poseidon sea surface heights with Tide Gauge sea levels, the results for the open-sea areas were better than those for the coast and island areas. (A minimal sketch of the resampling and smoothing step follows this entry.)

  • PDF
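
The resampling and smoothing step described in the abstract above can be illustrated with a short sketch. This is a minimal illustration rather than the authors' processing chain: the ~10-day resampling interval and the 60-day Gaussian window come from the abstract, while the sample series, the stand-in tide-gauge values, and the conversion of the window length into a filter sigma are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical irregularly sampled sea surface heights: (time in days, height in cm).
t_obs = np.array([0.0, 9.5, 20.1, 30.2, 39.8, 50.3, 60.0, 70.4, 80.1, 90.0])
h_obs = np.array([3.1, 2.7, 4.0, 5.2, 4.8, 3.9, 3.0, 2.5, 3.3, 4.1])

# Re-sample onto a regular ~10-day grid by linear interpolation.
t_grid = np.arange(0.0, 90.0 + 1e-9, 10.0)
h_grid = np.interp(t_grid, t_obs, h_obs)

# Smooth with a Gaussian filter whose window spans roughly 60 days
# (sigma = 1.5 grid steps of 10 days each is an assumption, not the paper's value).
h_smooth = gaussian_filter1d(h_grid, sigma=1.5)

def rms_and_corr(a, b):
    """Root-mean-square difference and correlation coefficient of two series."""
    return np.sqrt(np.mean((a - b) ** 2)), np.corrcoef(a, b)[0, 1]

# Compare against a (hypothetical) tide-gauge series treated the same way.
h_tide_smooth = gaussian_filter1d(np.interp(t_grid, t_obs, h_obs + 0.4), sigma=1.5)
rms, corr = rms_and_corr(h_smooth, h_tide_smooth)
```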

가부반응 데이터 특성을 가지는 탄약 체계의 신뢰도 추정방법 비교 (Comparison of Reliability Estimation Methods for Ammunition Systems with Quantal-response Data)

  • 류장희;백승준;손영갑
    • 한국군사과학기술학회지
    • /
    • Vol. 13, No. 6
    • /
    • pp.982-989
    • /
    • 2010
  • This paper presents accuracy comparison results for reliability estimation methods for one-shot systems such as ammunition. Quantal-response data, which follow a binomial distribution at each sampling time, characterize the lifetimes of one-shot systems. Quantal-response data sets of different sample sizes are simulated using lifetime data randomly sampled from assumed Weibull distributions with different shape parameters but an identical scale parameter. Reliability estimation methods from the open literature are then applied to the simulated quantal-response data to estimate the true reliability over time. Rankings in estimation accuracy for the different sample sizes are determined using a t-test on the SSE. Furthermore, the MSE at each time, which includes both the bias and the variance of the estimated reliability for each method, is analyzed to investigate how much bias and variance contribute to the SSE; the MSE analysis also reveals the reliability estimation trend of each method. The parametric estimation method provides more accurate reliability estimates than the other methods for most sample sizes.
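
A hedged sketch of the simulation-and-estimation loop the abstract describes: lifetimes are drawn from an assumed Weibull distribution, converted into pass/fail counts at fixed inspection times (quantal-response data), and the reliability is then re-estimated by maximum likelihood. The shape and scale values, inspection times, and sample size are illustrative assumptions, and the fitting step stands in for only the parametric method among those compared.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
shape_true, scale_true = 1.8, 10.0             # assumed Weibull parameters
t_insp = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # inspection (sampling) times
n_per_time = 30                                # items tested at each time

# Quantal-response data: failure counts among n items at each inspection time.
lifetimes = weibull_min.rvs(shape_true, scale=scale_true,
                            size=(len(t_insp), n_per_time), random_state=rng)
failures = (lifetimes <= t_insp[:, None]).sum(axis=1)

def neg_log_lik(params):
    """Binomial negative log-likelihood of the counts under a Weibull CDF."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    p = np.clip(weibull_min.cdf(t_insp, k, scale=lam), 1e-12, 1 - 1e-12)
    return -np.sum(failures * np.log(p) + (n_per_time - failures) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[1.0, 5.0], method="Nelder-Mead")
k_hat, lam_hat = fit.x
reliability_hat = 1.0 - weibull_min.cdf(t_insp, k_hat, scale=lam_hat)
```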

베이지안 기법을 활용한 공용성 모델개발 연구 (Pavement Performance Model Development Using Bayesian Algorithm)

  • 문성호
    • 한국도로학회논문집
    • /
    • Vol. 18, No. 1
    • /
    • pp.91-97
    • /
    • 2016
  • PURPOSES : The objective of this paper is to develop a pavement performance model based on the Bayesian algorithm, and compare the measured and predicted performance data. METHODS : In this paper, several pavement types such as SMA (stone mastic asphalt), PSMA (polymer-modified stone mastic asphalt), PMA (polymer-modified asphalt), SBS (styrene-butadiene-styrene) modified asphalt, and DGA (dense-graded asphalt) are modeled in terms of the performance evaluation of pavement structures, using the Bayesian algorithm. RESULTS : From case studies related to the performance model development, the statistical parameters of the mean value and standard deviation can be obtained through the Bayesian algorithm, using the initial performance data of two different pavement cases. Furthermore, an accurate performance model can be developed, based on the comparison between the measured and predicted performance data. CONCLUSIONS : Based on the results of the case studies, it is concluded that the determined coefficients of the nonlinear performance models can be used to accurately predict the long-term performance behaviors of DGA and modified asphalt concrete pavements. In addition, the developed models were evaluated through comparison studies between the initial measurement and prediction data, as well as between the final measurement and prediction data. In the model development, the initial measured data were used.
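
Since the paper's model form is not given in the abstract, the sketch below only illustrates the general idea of obtaining a posterior mean and standard deviation for the coefficients of a nonlinear performance model with a Bayesian (random-walk Metropolis) update; the condition-index data, the exponential decay form, and the prior ranges are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pavement condition-index measurements over age (years).
age = np.array([1.0, 2.0, 3.0, 5.0, 7.0])
pci = np.array([95.0, 90.0, 84.0, 74.0, 62.0])

def model(a, b, t):
    return a * np.exp(-b * t)          # assumed nonlinear performance form

def log_post(theta):
    a, b, sigma = theta
    if not (0.0 < a < 120.0 and 0.0 < b < 1.0 and 0.0 < sigma < 20.0):  # flat priors
        return -np.inf
    resid = pci - model(a, b, age)
    return -0.5 * np.sum((resid / sigma) ** 2) - len(pci) * np.log(sigma)

# Random-walk Metropolis sampling of the posterior.
theta = np.array([100.0, 0.05, 5.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(scale=[1.0, 0.005, 0.5])
    lp_prop = log_post(proposal)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = proposal, lp_prop
    samples.append(theta.copy())

posterior = np.array(samples[5000:])                 # discard burn-in
post_mean, post_std = posterior.mean(axis=0), posterior.std(axis=0)
```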

An Optimization Method for the Calculation of SCADA Main Grid's Theoretical Line Loss Based on DBSCAN

  • Cao, Hongyi;Ren, Qiaomu;Zou, Xiuguo;Zhang, Shuaitang;Qian, Yan
    • Journal of Information Processing Systems
    • /
    • Vol. 15, No. 5
    • /
    • pp.1156-1170
    • /
    • 2019
  • In recent years, the problem of data drift in the smart grid caused by manual operation has been widely studied by researchers in the related domains. Effectively and reliably identifying the reasonable data needed in the Supervisory Control and Data Acquisition (SCADA) system has become an important research topic. This paper analyzes the data composition of the smart grid and explains the power model in two smart grid applications, followed by an analysis of how each parameter is applied in the density-based spatial clustering of applications with noise (DBSCAN) algorithm. A comparison is then carried out of how the boxplot method, the probability-weight analysis method, and the DBSCAN clustering algorithm process the big-data-driven power grid data. According to the comparison results, the DBSCAN algorithm outperforms the other methods. Experimental verification shows that the DBSCAN clustering algorithm can effectively screen the power grid data, thereby significantly improving the accuracy and reliability of the calculated theoretical line loss of the main grid.
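
A hedged sketch of the DBSCAN-based screening step using scikit-learn; the feature columns and the eps/min_samples values are illustrative assumptions rather than the parameters analyzed in the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical SCADA records: columns = [supplied energy (MWh), delivered energy (MWh)].
records = np.array([
    [100.2,  97.8],
    [101.0,  98.5],
    [ 99.7,  97.1],
    [100.5,  98.0],
    [140.0,  60.0],   # drifted / abnormal record
    [100.9,  98.3],
])

X = StandardScaler().fit_transform(records)

# eps and min_samples control the density threshold; both are assumptions here.
labels = DBSCAN(eps=0.8, min_samples=3).fit_predict(X)

clean = records[labels != -1]        # keep clustered (reasonable) records
line_loss_rate = (clean[:, 0] - clean[:, 1]).sum() / clean[:, 0].sum()
```

Records labelled -1 are treated as noise (e.g. drifted manual entries) and excluded before the theoretical line loss is computed from the remaining data.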

불균형 블랙박스 동영상 데이터에서 충돌 상황의 다중 분류를 위한 손실 함수 비교 (Comparison of Loss Function for Multi-Class Classification of Collision Events in Imbalanced Black-Box Video Data)

  • 이의상;한석민
    • 한국인터넷방송통신학회논문지
    • /
    • Vol. 24, No. 1
    • /
    • pp.49-54
    • /
    • 2024
  • Data imbalance is a common problem in classification tasks, arising from a marked difference in the number of samples per class within a dataset. Such imbalance generally causes problems in classification models, such as overfitting, underfitting, and misleading performance metrics. Remedies include resampling, augmentation, regularization techniques, and loss-function adjustment. This paper focuses on loss-function adjustment: using the I3D and R3D_18 models, we compare the performance of several loss-function configurations (Cross Entropy, Balanced Cross Entropy, two Focal Loss settings with α = 1 and α = Balanced, and Asymmetric Loss) on imbalanced multi-class black-box (dashcam) video data.
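
As an illustration of the loss-function adjustment discussed above, here is a minimal PyTorch sketch of a multi-class focal loss with optional per-class α weights (γ = 0 with uniform weights recovers plain cross entropy, and inverse-frequency weights give a "balanced" α). The class counts and hyperparameter values are assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=None, gamma=2.0):
    """Multi-class focal loss.
    logits:  (batch, num_classes) raw scores
    targets: (batch,) integer class labels
    alpha:   optional (num_classes,) per-class weights (e.g. inverse frequency)
    """
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log-prob of true class
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt                      # down-weight easy samples
    if alpha is not None:
        loss = alpha[targets] * loss
    return loss.mean()

# Example: a "balanced" alpha from hypothetical class counts of an imbalanced set.
class_counts = torch.tensor([5000.0, 300.0, 120.0])             # assumed counts
alpha = 1.0 / class_counts
alpha = alpha / alpha.sum() * len(class_counts)

logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
print(focal_loss(logits, targets, alpha=alpha, gamma=2.0))
```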

순서 범주형 자료해석법의 비교 연구 (A Study on Comparison with the Methods of Ordered Categorical Data of Analysis)

  • 김홍준;송서일
    • 산업경영시스템학회지
    • /
    • Vol. 20, No. 44
    • /
    • pp.207-215
    • /
    • 1997
  • This paper compares Taguchi's accumulation analysis method with Nair's test on ordered categorical data from an industrial experiment for quality improvement. Taguchi's accumulation analysis method is shown to have reasonable power for detecting location effects, while Nair's test identifies location and dispersion effects separately. Accordingly, Taguchi's accumulation analysis needs to be extended with methods for detecting dispersion effects as well as location effects. In addition, this paper recommends models for analyzing ordered categorical data, for example the cumulative logit model and the mean response model. Subsequently, simple and reasonable methods should be introduced so that they are more likely to be used by practitioners. (A sketch of a cumulative logit fit follows this entry.)

  • PDF
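
One of the recommended models, the cumulative logit (proportional odds) model, can be fitted as in the hedged sketch below using statsmodels; the two-level factor and the ordered quality scores are made-up illustration data, and Taguchi's accumulation analysis itself is not reproduced here.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical experiment: a two-level factor and an ordered quality response
# (0 = poor, 1 = fair, 2 = good).
df = pd.DataFrame({
    "factor_A": [0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1],
    "quality":  [0, 1, 1, 2, 2, 2, 1, 2, 1, 0, 0, 1],
})
df["quality"] = df["quality"].astype(
    pd.CategoricalDtype(categories=[0, 1, 2], ordered=True))

# Cumulative logit (proportional odds) model: location effect of factor_A.
result = OrderedModel(df["quality"], df[["factor_A"]], distr="logit").fit(
    method="bfgs", disp=False)
print(result.params)   # threshold parameters and the factor_A location effect
```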

한국 쌀과 일본 쌀의 물리화학적 특성 연구 (I) NIR을 사용한 한국 쌀과 일본 쌀의 품질 비교 (Comparison of Korean and Japanese Rice Cultivars in Terms of Physicochemical Properties (I) The Comparison of Korean and Japanese Rice by NIR and Chemical Analysis)

  • 김혁일
    • 동아시아식생활학회지
    • /
    • Vol. 14, No. 2
    • /
    • pp.135-144
    • /
    • 2004
  • A total of 40 Korean and Japanese rice varieties were evaluated for their main chemical components, physical properties, cooking quality, pasting properties, and instrumental measurements. Based on these quality evaluations, it was concluded that the Korean and Japanese rice varieties were not significantly different in the basic components measured by NIR (Near Infra-Red) data and by chemical analysis of the uncooked brown and milled rice. By chemical analysis, Korean rice had slightly higher protein and amylose contents but much lower fat acidity than Japanese rice. Across the data from the three different NIR methods, Korean and Japanese milled rice were very similar except in the taste score. Japanese rice showed a slightly higher taste score and slightly higher lightness and whiteness, but lower yellowness and redness, than Korean rice. Overall, from the NIR data and the chemical analysis, Korean and Japanese rice had very similar components except for the fat content.

  • PDF

Performance Comparison on Speech Codecs for Digital Watermarking Applications

  • Mamongkol, Y.;Amornraksa, T.
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2002년도 ITC-CSCC -1
    • /
    • pp.466-469
    • /
    • 2002
  • Using the intelligent information contained within the speech to identify the specific hidden data in the watermarked multimedia data is considered an efficient way to achieve speech digital watermarking. This paper presents a performance comparison between various types of speech codec in order to determine an appropriate one for digital watermarking applications. In the experiments, speech signals encoded by four different speech codecs, namely the CELP, GSM, SBC, and G.723.1 codecs, are embedded into a grayscale image, and their performance in terms of speech recognition is compared. The method for embedding the speech signal into the host data is borrowed from a watermarking method based on zerotrees of wavelet packet coefficients. To evaluate the efficiency of the speech codecs for watermarking applications, the speech signal extracted from the attacked watermarked image is played back to listeners, who then judge whether its content is intelligible.

  • PDF

A Comparison of Optimization Algorithms: An Assessment of Hydrodynamic Coefficients

  • Kim, Daewon
    • 해양환경안전학회지
    • /
    • Vol. 24, No. 3
    • /
    • pp.295-301
    • /
    • 2018
  • This study compares optimization algorithms for the efficient estimation of a ship's hydrodynamic coefficients. Two constrained algorithms, the interior point method and sequential quadratic programming, are compared for the estimation. A mathematical optimization is designed to obtain optimal hydrodynamic coefficients for modelling a ship, and benchmark data are collected from sea trials of a training ship. A calibration for environmental influence and a sensitivity analysis for efficiency are carried out before the optimization is implemented. The optimization is composed of three steps that account for the correlation between coefficients and manoeuvre characteristics. The manoeuvre characteristics of the simulation results for both sets of optimized coefficients are close to each other, and both also fit the benchmark data. However, this similarity hinders the comparison, and it appears that the optimization conditions, such as the design variables and constraints, are not sufficient to compare the algorithms strictly. An enhanced optimization with additional sea-trial measurement data should be carried out in future studies.
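
The two constrained algorithms named in the abstract have readily available counterparts in SciPy: "trust-constr" (which uses a trust-region interior-point approach when bounds or inequality constraints are present) and "SLSQP" (sequential quadratic programming). The sketch below fits a stand-in first-order response model with two coefficients, since the actual manoeuvring model and sea-trial data are not given here; the model, data, and bounds are assumptions.

```python
import numpy as np
from scipy.optimize import Bounds, minimize

# Hypothetical benchmark: a measured yaw-rate response and a simple first-order
# response model with two coefficients to identify (illustration only).
t = np.linspace(0.0, 50.0, 26)
r_measured = 0.8 * (1.0 - np.exp(-t / 12.0)) \
    + 0.01 * np.random.default_rng(2).normal(size=t.size)

def simulate(coeffs):
    gain, tau = coeffs
    return gain * (1.0 - np.exp(-t / tau))

def objective(coeffs):
    return np.sum((simulate(coeffs) - r_measured) ** 2)

bounds = Bounds([0.1, 1.0], [2.0, 50.0])     # assumed coefficient ranges
x0 = np.array([0.5, 5.0])

res_ip  = minimize(objective, x0, method="trust-constr", bounds=bounds)
res_sqp = minimize(objective, x0, method="SLSQP", bounds=bounds)
print(res_ip.x, res_sqp.x)   # the two estimates should be close to each other
```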

단일 루프 검지기를 이용한 차종 분류 알고리즘 개발 (Development of a Vehicle Classification Algorithm Using an Inductive Loop Detector on a Freeway)

  • 이승환;조한선;최기주
    • 대한교통학회지
    • /
    • Vol. 14, No. 1
    • /
    • pp.135-154
    • /
    • 1996
  • This paper presents a heuristic algorithm for classifying vehicles using a single loop detector. The data used for the development of the algorithm are the frequency variations of vehicles sensed by circular loop detectors normally buried beneath the expressway. The development of the algorithm requires a pre-processing of the data that consists of two parts: one is the normalization of both occupancy time and frequency variation; the other is finding a suitable sample size for each vehicle category and calculating the average normalized frequency along occupancy time, which is stored for comparison. Detected values are then compared with these stored data to locate the best-fitting pattern. After the normalization process, several frameworks for the comparison scheme were developed. The scales tested were 10 and 15 frames in occupancy time (X-axis) and 10 and 15 frames in frequency variation (Y-axis). The combination of 10 frames on the X-axis and 15 frames on the Y-axis turned out to be the most efficient normalization scale, producing a 96 percent correct classification rate for six vehicle types. (A sketch of the normalization and template-matching idea follows this entry.)

  • PDF
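
A hedged sketch of the normalization-and-matching idea: each vehicle's frequency-variation trace is rescaled onto a fixed grid of occupancy-time and frequency frames (10 x 15 here, one of the scales tested) and compared with stored class-average templates by minimum squared distance. The traces, templates, and distance measure are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

N_TIME, N_FREQ = 10, 15     # occupancy-time frames x frequency frames (one tested scale)

def normalize_trace(freq_trace):
    """Rescale a raw frequency-variation trace onto the fixed frame grid."""
    trace = np.asarray(freq_trace, dtype=float)
    # Normalize occupancy time: resample to N_TIME points via linear interpolation.
    x_old = np.linspace(0.0, 1.0, trace.size)
    x_new = np.linspace(0.0, 1.0, N_TIME)
    resampled = np.interp(x_new, x_old, trace)
    # Normalize frequency variation onto 0..N_FREQ-1 frames.
    span = resampled.max() - resampled.min()
    return (resampled - resampled.min()) / (span + 1e-12) * (N_FREQ - 1)

def classify(freq_trace, templates):
    """templates: dict of class name -> stored average normalized pattern."""
    pattern = normalize_trace(freq_trace)
    return min(templates, key=lambda c: np.sum((templates[c] - pattern) ** 2))

# Hypothetical stored templates (class averages) and one detected trace.
templates = {
    "passenger_car": normalize_trace([0, 3, 8, 9, 7, 2, 0]),
    "truck":         normalize_trace([0, 2, 5, 9, 9, 9, 8, 6, 3, 1, 0]),
}
print(classify([0, 2, 7, 9, 8, 3, 1], templates))
```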