• Title/Summary/Keyword: Data Comparison

Comparison of Topex/Poseidon sea surface heights and Tide Gauge sea levels in the South Indian Ocean

  • Yoon, Hong-Joo
    • Proceedings of the KSRS Conference
    • /
    • 1998.09a
    • /
    • pp.70-75
    • /
    • 1998
  • The comparison of Topex/Poseidon sea surface heights and Tide Gauge sea levels was studied in the South Indian Ocean over about three years of the Topex/Poseidon mission (cycles 11-121), from January 1993 through December 1995. The AVISO user's handbook was followed for sea surface height data processing. Topex/Poseidon sea surface heights ($\zeta^{T/P}$), the satellite data at the points closest to each Tide Gauge station, were chosen at the same latitude as the Tide Gauge station. These data were re-sampled by linear interpolation at an interval of about 10 days and filtered with a Gaussian filter with a 60-day window. Tide Gauge sea levels ($\zeta^{Argos}$, $\zeta^{In-situ}$ and $\zeta^{Model}$) were treated with the same method as the satellite data. The main conclusions obtained from the root-mean-square differences and correlation coefficients were as follows: 1) for producing Tide Gauge sea levels from bottom pressure, the in-situ data of METEO-FRANCE showed very good agreement compared with the model data of ECMWF, and 2) in comparing Topex/Poseidon sea surface heights with Tide Gauge sea levels, the results for the open-sea areas were better than those for the coast and island areas.
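
As a hedged illustration of the processing chain this abstract describes (re-sampling by linear interpolation onto a ~10-day grid, then a 60-day Gaussian window, then RMS and correlation statistics), the sketch below runs on synthetic series; the variable names, the sigma conversion, and the comparison series are assumptions, not the AVISO procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
# Hypothetical irregular altimeter samples over 1993-1995 (days since 1993-01-01).
t_obs = np.sort(rng.uniform(0, 1095, 120))
ssh_obs = 0.1 * np.sin(2 * np.pi * t_obs / 365.25) + 0.02 * rng.standard_normal(t_obs.size)

# Re-sample by linear interpolation onto a regular ~10-day grid.
t_grid = np.arange(0, 1095, 10.0)
ssh_grid = np.interp(t_grid, t_obs, ssh_obs)

# Gaussian filter with a 60-day window: +/-30 days taken as ~3 sigma,
# so sigma = 10 days = 1 sample on the 10-day grid (an assumption).
ssh_smooth = gaussian_filter1d(ssh_grid, sigma=1.0, truncate=3.0)

# RMS difference and correlation against a gauge series processed the same way
# (here a second synthetic series standing in for the tide gauge record).
tg_smooth = gaussian_filter1d(ssh_grid + 0.01 * rng.standard_normal(t_grid.size), 1.0)
rms = np.sqrt(np.mean((ssh_smooth - tg_smooth) ** 2))
corr = np.corrcoef(ssh_smooth, tg_smooth)[0, 1]
print(f"RMS = {rms:.4f} m, r = {corr:.3f}")
```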

Comparison of Reliability Estimation Methods for Ammunition Systems with Quantal-response Data (가부반응 데이터 특성을 가지는 탄약 체계의 신뢰도 추정방법 비교)

  • Ryu, Jang-Hee;Back, Seung-Jun;Son, Young-Kap
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.13 no.6
    • /
    • pp.982-989
    • /
    • 2010
  • This paper compares the accuracy of reliability estimation methods for one-shot systems such as ammunition. Quantal-response data, which follow a binomial distribution at each sampling time, characterize the lifetimes of one-shot systems. Various quantal-response data sets of different sample sizes are simulated using lifetime data randomly sampled from assumed Weibull distributions with different shape parameters but an identical scale parameter. Reliability estimation methods from the open literature are then applied to the simulated quantal-response data to estimate the true reliability over time. Rankings in estimation accuracy for different sample sizes are determined using a t-test on the SSE. Furthermore, the MSE at each time, which includes both the bias and the variance of the estimated reliability for each method, is analyzed to investigate how much bias and variance each contribute to the SSE; the MSE analysis also reveals the reliability estimation trend of each method. The parametric estimation method provides more accurate reliability estimates than the other methods for most sample sizes.
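
The simulation-and-estimation loop described above can be sketched as follows; the Weibull parameters, sampling times, sample size, and the plain binomial maximum-likelihood fit are illustrative assumptions, not the paper's exact settings or compared methods.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
beta_true, eta_true = 2.0, 10.0      # assumed Weibull shape and scale
times = np.arange(1.0, 15.0)         # sampling (storage) times
n_per_time = 20                      # units destructively tested at each time

# Quantal-response data: at each time t, count units still functioning
# (lifetime > t); each count is binomial with success probability R(t).
lifetimes = eta_true * rng.weibull(beta_true, size=(times.size, n_per_time))
successes = (lifetimes > times[:, None]).sum(axis=1)

def neg_log_lik(params):
    """Binomial negative log-likelihood under Weibull reliability R(t)."""
    beta, eta = params
    if beta <= 0 or eta <= 0:
        return np.inf
    r = np.clip(np.exp(-(times / eta) ** beta), 1e-12, 1 - 1e-12)
    return -np.sum(successes * np.log(r) + (n_per_time - successes) * np.log(1 - r))

fit = minimize(neg_log_lik, x0=[1.0, 5.0], method="Nelder-Mead")
beta_hat, eta_hat = fit.x

# SSE of estimated vs. true reliability, the accuracy metric used for ranking.
r_true = np.exp(-(times / eta_true) ** beta_true)
r_hat = np.exp(-(times / eta_hat) ** beta_hat)
print("SSE =", np.sum((r_hat - r_true) ** 2))
```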

Pavement Performance Model Development Using Bayesian Algorithm (베이지안 기법을 활용한 공용성 모델개발 연구)

  • Mun, Sungho
    • International Journal of Highway Engineering
    • /
    • v.18 no.1
    • /
    • pp.91-97
    • /
    • 2016
  • PURPOSES : The objective of this paper is to develop a pavement performance model based on a Bayesian algorithm, and to compare measured and predicted performance data. METHODS : Several pavement types, such as SMA (stone mastic asphalt), PSMA (polymer-modified stone mastic asphalt), PMA (polymer-modified asphalt), SBS (styrene-butadiene-styrene) modified asphalt, and DGA (dense-graded asphalt), are modeled for the performance evaluation of pavement structures using the Bayesian algorithm. RESULTS : In the case studies on performance model development, the statistical parameters, namely the mean and standard deviation, are obtained through the Bayesian algorithm using the initial performance data of two different pavement cases; an accurate performance model can then be developed from the comparison between measured and predicted performance data. CONCLUSIONS : Based on the case studies, the determined coefficients of the nonlinear performance models can be used to accurately predict the long-term performance of DGA and modified asphalt concrete pavements. The developed models were evaluated by comparing the initial, as well as the final, measurements with the corresponding predictions; the initial measured data were used in the model development.
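
A minimal sketch of the kind of Bayesian updating described above, assuming a hypothetical power-law performance model y = a*t**b with Gaussian noise and a grid posterior; the functional form, noise level, and numbers are illustrative, not the paper's calibrated models.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(1, 11, dtype=float)                        # years in service
y = 1.5 * t ** 0.4 + 0.2 * rng.standard_normal(t.size)   # measured performance

# Uniform grid prior over the coefficients (a, b) of y = a * t**b.
a_grid = np.linspace(0.5, 3.0, 200)
b_grid = np.linspace(0.1, 1.0, 200)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

sigma = 0.2                                              # assumed noise std
pred = A[..., None] * t ** B[..., None]                  # predictions per (a, b)
log_lik = -0.5 * np.sum(((y - pred) / sigma) ** 2, axis=-1)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()                                       # normalized posterior

# Posterior mean and standard deviation of each coefficient.
a_mean = np.sum(post * A); a_std = np.sqrt(np.sum(post * (A - a_mean) ** 2))
b_mean = np.sum(post * B); b_std = np.sqrt(np.sum(post * (B - b_mean) ** 2))
print(f"a = {a_mean:.3f} +/- {a_std:.3f}, b = {b_mean:.3f} +/- {b_std:.3f}")
```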

An Optimization Method for the Calculation of SCADA Main Grid's Theoretical Line Loss Based on DBSCAN

  • Cao, Hongyi;Ren, Qiaomu;Zou, Xiuguo;Zhang, Shuaitang;Qian, Yan
    • Journal of Information Processing Systems
    • /
    • v.15 no.5
    • /
    • pp.1156-1170
    • /
    • 2019
  • In recent years, the problem of data drift in the smart grid caused by manual operation has been widely studied by researchers in the related domains. How to effectively and reliably select the reasonable data needed in the Supervisory Control and Data Acquisition (SCADA) system has become an important research topic. This paper analyzes the data composition of the smart grid and explains the power model in two smart grid applications, followed by an analysis of the role of each parameter in the density-based spatial clustering of applications with noise (DBSCAN) algorithm. A comparison is then carried out between the boxplot method, the probability weight analysis method, and the DBSCAN clustering algorithm in processing the power grid's big data. According to the comparison results, the DBSCAN algorithm outperforms the other methods. Experimental verification shows that the DBSCAN clustering algorithm can effectively screen the power grid data, thereby significantly improving the accuracy and reliability of the calculated theoretical line loss of the main grid.
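
A minimal sketch of DBSCAN-based screening before a line-loss calculation, using scikit-learn on synthetic supplied/delivered energy records; the features, eps, and min_samples values are illustrative assumptions, not the paper's tuned configuration.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Hypothetical hourly SCADA records: supplied vs. delivered energy (MWh).
supplied = rng.normal(100.0, 5.0, 500)
delivered = supplied * rng.normal(0.96, 0.005, 500)  # ~4% true loss rate
supplied[:10] += rng.normal(40.0, 5.0, 10)           # drifted / mis-entered points

# Cluster in standardized feature space; DBSCAN labels noise points as -1.
X = StandardScaler().fit_transform(np.column_stack([supplied, delivered]))
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(X)
clean = labels != -1

# Theoretical line-loss rate computed from the screened records only.
loss_rate = 1.0 - delivered[clean].sum() / supplied[clean].sum()
print(f"kept {clean.sum()} of {len(labels)} records, loss rate = {loss_rate:.3%}")
```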

Comparison of Loss Function for Multi-Class Classification of Collision Events in Imbalanced Black-Box Video Data (불균형 블랙박스 동영상 데이터에서 충돌 상황의 다중 분류를 위한 손실 함수 비교)

  • Euisang Lee;Seokmin Han
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.49-54
    • /
    • 2024
  • Data imbalance is a common issue in classification problems, stemming from a significant disparity in the number of samples between classes in a dataset. Such imbalance typically causes problems in classification models, including overfitting, underfitting, and misinterpretation of performance metrics. Methods to address it include resampling, augmentation, regularization techniques, and adjustment of the loss function. In this paper, we focus on loss function adjustment, comparing the performance of several loss function configurations (Cross Entropy, Balanced Cross Entropy, two settings of Focal Loss with 𝛼 = 1 and 𝛼 = Balanced, and Asymmetric Loss) on multi-class black-box video data with imbalance issues. The comparison is conducted using the I3D and R3D_18 models.
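
Of the compared configurations, Focal Loss is the least standard; below is a minimal multi-class sketch in PyTorch in which the per-class alpha weighting corresponds to an "𝛼 = Balanced" setting. The gamma value, weight formula, and dummy data are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=None, gamma=2.0):
    """logits: (N, C); targets: (N,); alpha: optional per-class weights (C,)."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of true class
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt                     # down-weight easy samples
    if alpha is not None:                                      # the "balanced" variant
        loss = alpha[targets] * loss
    return loss.mean()

# Usage with class weights inversely proportional to class frequency.
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
counts = torch.bincount(targets, minlength=3).float().clamp(min=1)
alpha = counts.sum() / (3 * counts)
print(focal_loss(logits, targets, alpha=alpha))
```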

A Study on Comparison with the Methods of Ordered Categorical Data of Analysis (순서 범주형 자료해석법의 비교 연구)

  • 김홍준;송서일
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.20 no.44
    • /
    • pp.207-215
    • /
    • 1997
  • This paper compares Taguchi's accumulation analysis method with the Nair test on ordered categorical data from an industrial experiment for quality improvement. Taguchi's accumulation analysis method is shown to have reasonable power for detecting location effects, while the Nair test identifies location and dispersion effects separately. Accordingly, Taguchi's accumulation analysis needs methods for detecting dispersion effects as well as location effects. In addition, this paper recommends models for analyzing ordered categorical data, for example, the cumulative logit model and the mean response model. Simple, reasonable methods such as these are more likely to be used by practitioners.
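
A minimal sketch of the recommended cumulative logit (proportional odds) model on synthetic ordered-category data, using statsmodels; the data-generating process and the single predictor are hypothetical, not the paper's industrial experiment.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(4)
x = rng.normal(size=200)                     # hypothetical factor setting
latent = 0.8 * x + rng.logistic(size=200)    # latent quality score
y = pd.Series(pd.cut(latent, bins=[-np.inf, -1, 0, 1, np.inf],
                     labels=["poor", "fair", "good", "excellent"], ordered=True))

# Fit: the slope captures the location effect; the thresholds are the
# cut-points between adjacent ordered categories.
res = OrderedModel(y, x[:, None], distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```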

Comparison of Korean and Japanese Rice Cultivars in Terms of Physicochemical Properties (I) The Comparison of Korean and Japanese Rice by NIR and Chemical Analysis (한국 쌀과 일본 쌀의 물리화학적 특성 연구 (I) NIR을 사용한 한국 쌀과 일본 쌀의 품질 비교)

  • 김혁일
    • Journal of the East Asian Society of Dietary Life
    • /
    • v.14 no.2
    • /
    • pp.135-144
    • /
    • 2004
  • A total of 40 Korean and Japanese rice varieties were evaluated for their main chemical components, physical properties, cooking quality, pasting properties, and instrumental measurements. Based on these quality evaluations, Korean and Japanese rice varieties were not significantly different in the basic components measured by NIR (Near Infrared) analysis and by chemical analysis of uncooked brown and milled rice. Chemical analysis showed that Korean rice had slightly higher protein and amylose contents but much lower fat acidity than Japanese rice. Across the data from three different NIR methods, Korean and Japanese milled rice were very similar except in taste score: Japanese rice showed a slightly higher taste score and slightly higher lightness and whiteness, but lower yellowness and redness, than Korean rice. Overall, from the NIR and chemical analysis data, Korean and Japanese rice had very similar components except for fat content.

Performance Comparison on Speech Codecs for Digital Watermarking Applications

  • Mamongkol, Y.;Amornraksa, T.
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.466-469
    • /
    • 2002
  • Using intelligible information contained within speech to identify specific hidden data in watermarked multimedia is considered an efficient approach to speech digital watermarking. This paper presents a performance comparison between various types of speech codec in order to determine an appropriate one for digital watermarking applications. In the experiments, speech signals encoded by four different speech codecs, namely the CELP, GSM, SBC, and G.723.1 codecs, are embedded into a grayscale image, and their performance in terms of speech recognition is compared. The method for embedding the speech signal into the host data is borrowed from a watermarking method based on zerotrees of wavelet packet coefficients. To evaluate the efficiency of each speech codec in watermarking applications, the speech signal extracted from the attacked watermarked image is played back to listeners, who judge whether its content is intelligible.
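
The paper's zerotree-based wavelet-packet scheme is considerably more involved; as a hedged stand-in, the sketch below embeds a bitstream into plain wavelet detail coefficients by parity quantization (quantization index modulation) using PyWavelets. The host image, bit source, and quantization step are assumptions, not the authors' method.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
image = rng.uniform(0, 255, (64, 64))   # stand-in for the grayscale host image
bits = rng.integers(0, 2, 200)          # stand-in for the encoded speech bits

coeffs = pywt.wavedec2(image, "db2", level=2)
cH, cV, cD = coeffs[1]                  # level-2 detail sub-bands

flat = cH.flatten()
step = 8.0                              # capacity/robustness trade-off (assumed)
for i, b in enumerate(bits[: flat.size]):
    q = np.floor(flat[i] / step)
    if int(q) % 2 != b:                 # force the cell index's parity to the bit
        q += 1
    flat[i] = q * step + step / 2       # place the coefficient mid-cell

coeffs[1] = (flat.reshape(cH.shape), cV, cD)
watermarked = pywt.waverec2(coeffs, "db2")

# Extraction: re-decompose and read the parity back from the same positions.
cH2 = pywt.wavedec2(watermarked, "db2", level=2)[1][0].flatten()
recovered = np.floor(cH2[: bits.size] / step).astype(int) % 2
print("bit errors:", int(np.sum(recovered != bits)))
```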

A Comparison of Optimization Algorithms: An Assessment of Hydrodynamic Coefficients

  • Kim, Daewon
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.24 no.3
    • /
    • pp.295-301
    • /
    • 2018
  • This study compares optimization algorithms for efficient estimation of a ship's hydrodynamic coefficients. Two constrained algorithms, the interior-point method and sequential quadratic programming, are compared. A mathematical optimization problem is designed to obtain optimal hydrodynamic coefficients for modelling a ship, and benchmark data are collected from sea trials of a training ship. A calibration for environmental influence and a sensitivity analysis for efficiency are carried out before the optimization, which is composed of three steps that account for correlations between coefficients and manoeuvre characteristics. The manoeuvre characteristics of the simulation results for both sets of optimized coefficients are close to each other, and both fit the benchmark data. However, this similarity hinders a strict comparison, and it is supposed that the optimization conditions, such as the design variables and constraints, are not sufficient to discriminate between the two algorithms. An enhanced optimization with additional sea-trial measurement data should be carried out in future studies.
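
A minimal sketch of comparing the two classes of constrained optimizer on a coefficient-fitting problem, using SciPy's "SLSQP" (an SQP method) and "trust-constr" (an interior-point-style trust-region method); the toy response model, bounds, and data are stand-ins for the sea-trial benchmark, not the study's three-step setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 50)
true = np.array([0.8, -0.3])                     # "hydrodynamic" coefficients

def simulate(c):
    """Toy response model standing in for the ship manoeuvring model."""
    return c[0] * t * np.exp(c[1] * t)

measured = simulate(true) + 0.01 * rng.standard_normal(t.size)

def sse(c):
    """Objective: squared error between simulation and benchmark data."""
    return np.sum((simulate(c) - measured) ** 2)

bounds = [(0.0, 2.0), (-1.0, 0.0)]               # physically motivated ranges
x0 = np.array([0.5, -0.5])
for method in ("SLSQP", "trust-constr"):
    res = minimize(sse, x0, method=method, bounds=bounds)
    print(method, res.x, res.fun)
```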

Development of a Vehicle Classification Algorithm Using an Inductive Loop Detector on a Freeway (단일 루프 검지기를 이용한 차종 분류 알고리즘 개발)

  • 이승환;조한선;최기주
    • Journal of Korean Society of Transportation
    • /
    • v.14 no.1
    • /
    • pp.135-154
    • /
    • 1996
  • This paper presents a heuristic algorithm for classifying vehicles using a single loop detector. The data used to develop the algorithm are the frequency variations of vehicles sensed by the circular loop detectors normally buried beneath the expressway. Pre-processing of the data is required for the algorithm, which consists of two parts: one normalizes both occupancy time and frequency variation; the other finds a suitable sample size for each vehicle category and computes the average of the normalized frequencies over occupancy time, which is stored for comparison. Detected values are then compared with the stored data to locate the best-fitting pattern. After the normalization process, several comparison schemes were developed. The scales tested were 10 and 15 frames in occupancy time (X-axis) and 10 and 15 frames in frequency variation (Y-axis). The combination of 10 frames in X and 15 frames in Y turned out to be the most efficient normalization scale, producing a 96 percent correct classification rate for six vehicle types.
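
A minimal sketch of the normalize-and-match idea described above, assuming hypothetical class templates on a 10-frame occupancy-time axis with frequency variation scaled to 15 frames; nothing here reproduces the paper's measured loop-detector signatures or its exact matching rule.

```python
import numpy as np

N_T, N_F = 10, 15   # frames along occupancy time (X) and frequency variation (Y)

def normalize(signature):
    """Resample a raw frequency-variation curve onto N_T points, scaled to [0, N_F]."""
    x = np.linspace(0, 1, signature.size)
    y = np.interp(np.linspace(0, 1, N_T), x, signature)
    span = y.max() - y.min()
    return (y - y.min()) / (span if span > 0 else 1.0) * N_F

# Hypothetical stored templates: average normalized signature per vehicle class.
templates = {
    "car": normalize(np.linspace(0, 1, 40) ** 0.5),
    "bus": normalize(np.linspace(0, 1, 60)),
    "truck": normalize(np.linspace(0, 1, 80) ** 2),
}

def classify(raw_signature):
    """Match a detected signature to the stored template with the smallest SSE."""
    v = normalize(raw_signature)
    return min(templates, key=lambda k: np.sum((templates[k] - v) ** 2))

print(classify(np.linspace(0, 1, 55) ** 1.8))   # expected: closest to "truck"
```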
