• Title/Summary/Keyword: Accuracy Assessment


Development of a Simultaneous Analytical Method for Diquat, Paraquat and Chlormequat in Animal Products Using UPLC-MS/MS

  • Cho, Il Kyu;Rahman, Md. Musfiqur;Seol, Jae Ung;Noh, Hyun Ho;Jo, Hyeong-Wook;Moon, Joon-Kwan
    • Korean Journal of Environmental Agriculture, v.39 no.4, pp.368-374, 2020
  • BACKGROUND: The residue analysis of polar pesticides remains a challenge, and the simultaneous analysis of multiple polar pesticides is even more difficult. Diquat, paraquat, and chlormequat are typical examples of highly polar pesticides, and the existing methods for their analysis are complex and time consuming. Therefore, a simple, quick, and effective method was developed in the present study for the simultaneous analysis of diquat, paraquat, and chlormequat in animal products (meat and fat) using UPLC-MS/MS. METHODS AND RESULTS: Samples were extracted with acidified acetonitrile and water, re-extracted with acidified acetonitrile, and the combined extracts were centrifuged. The extract was then cleaned up on an HLB cartridge after reconstitution in acidic acetonitrile and water. The method was validated in quintuplicate at three concentrations. The limits of detection (LOD) and quantification (LOQ) were 0.0015 and 0.005 mg/L, respectively. A matrix suppression effect was observed for all analytes. A seven-point matrix-matched calibration curve constructed for each compound showed excellent linearity, with coefficients of determination (R²) ≥ 0.991. The accuracy and precision of the method, calculated from recovery and repeatability, ranged from 62.4 to 119.7% with relative standard deviations below 18.8%. CONCLUSION: The recovery and repeatability of the developed method were within the acceptable range of the Codex Alimentarius guidelines. The developed method can be applied to the routine monitoring of diquat, paraquat, and chlormequat in animal products (meat and fat).
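
For illustration, here is a minimal sketch of how the linearity (R²) of a matrix-matched calibration curve and the recovery of a spiked sample are typically computed; the concentrations, peak areas, and spiking level below are hypothetical placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical seven-point matrix-matched calibration data (illustrative only).
conc = np.array([0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5])          # mg/L
area = np.array([1180, 2410, 4770, 12100, 23900, 48200, 120500])   # peak areas

# Ordinary least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

# Coefficient of determination (R^2) of the calibration curve
pred = slope * conc + intercept
r_squared = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# Recovery (%) of a sample spiked at a known level, quantified against the curve
spike_level = 0.05        # mg/L spiked into blank matrix (hypothetical)
measured_area = 11100     # hypothetical peak area of the spiked sample
measured_conc = (measured_area - intercept) / slope
recovery_pct = 100 * measured_conc / spike_level

print(f"slope = {slope:.0f}, R^2 = {r_squared:.4f}, recovery = {recovery_pct:.1f}%")
```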

Evaluation of Heat Waves Predictability of Korean Integrated Model (한국형수치예보모델 KIM의 폭염 예측 성능 검증)

  • Jung, Jiyoung;Lee, Eun-Hee;Park, Hye-Jin
    • Atmosphere, v.32 no.4, pp.277-295, 2022
  • The global weather prediction model, the Korean Integrated Model (KIM), has been in operation at the Korea Meteorological Administration since April 2020. This study assessed KIM's performance in predicting the heat waves (HWs) that occurred in Korea in 2020. Case experiments for 2018-2020 were conducted to support the reliability of the assessment, and the factors affecting HW predictability were analyzed. The simulated expansion and retreat of the Tibetan High and the North Pacific High during the 2020 HW agreed well with the analysis. However, the model showed significant cold biases in the maximum surface temperature. The temperature bias was found to be highly related to an underestimation of downward shortwave radiation at the surface, which was linked to cloudiness. KIM tended to overestimate nighttime cloud, which delayed cloud dissipation in the morning and contributed to the shortage of downward solar radiation. Vertical profiles of temperature and moisture showed that the cold bias and moisture trapped in the lower atmosphere produced favorable conditions for cloud formation over the Yellow Sea, which in turn contributed to the overestimation of cloud over downwind land. A sensitivity test was performed to reduce the model bias by modulating the moisture mixing parameter in the boundary layer scheme. The results indicated that daytime temperature errors were reduced by an increase in surface solar irradiance associated with enhanced cloud dissipation. This study suggests that not only the synoptic features but also the accuracy of low-level temperature and moisture conditions played an important role in predicting the maximum temperature during the HWs in medium-range forecasts.
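
As a rough illustration of the kind of verification statistics behind statements such as "significant cold biases in the maximum surface temperature," the sketch below computes the mean bias and RMSE of forecast daily maximum temperatures against observations; all values are hypothetical and are neither KIM output nor station data.

```python
import numpy as np

# Hypothetical daily maximum temperatures (deg C): forecasts vs. observations.
forecast = np.array([32.1, 33.0, 31.5, 34.2, 33.8, 35.0, 34.5])
observed = np.array([33.4, 34.8, 33.0, 35.9, 35.1, 36.3, 36.0])

error = forecast - observed
mean_bias = error.mean()                 # negative value indicates a cold bias
rmse = np.sqrt(np.mean(error ** 2))

print(f"mean bias = {mean_bias:.2f} degC, RMSE = {rmse:.2f} degC")
```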

Sensitivity, specificity, and predictive value of cardiac symptoms assessed by emergency medical services providers in the diagnosis of acute myocardial infarction: a multi-center observational study

  • Park, Jeong Ho;Moon, Sung Woo;Kim, Tae Yun;Ro, Young Sun;Cha, Won Chul;Kim, Yu Jin;Shin, Sang Do
    • Clinical and Experimental Emergency Medicine, v.5 no.4, pp.264-271, 2018
  • Objective: For patients with acute myocardial infarction (AMI), symptoms assessed by emergency medical services (EMS) providers play a critical role in prehospital treatment decisions. The purpose of this study was to evaluate the diagnostic accuracy of EMS provider-assessed cardiac symptoms of AMI. Methods: Patients transported by EMS to four study hospitals from 2008 to 2012 were included. Using EMS and administrative emergency department databases, patients were stratified according to the presence of EMS-assessed cardiac symptoms and an emergency department diagnosis of AMI. Cardiac symptoms were defined as chest pain, dyspnea, palpitations, and syncope. Disproportionate stratified sampling was used, and the medical records of sampled patients were reviewed to identify an actual diagnosis of AMI. Using inverse probability weighting, verification bias-corrected diagnostic performance was estimated. Results: Overall, 92,353 patients were enrolled in the study. Of these, 13,971 (15.1%) complained of cardiac symptoms to EMS providers. A total of 775 patients were sampled for hospital record review. The sensitivity, specificity, positive predictive value, and negative predictive value of EMS provider-assessed cardiac symptoms for the final diagnosis of AMI were 73.3% (95% confidence interval [CI], 70.8 to 75.7), 85.3% (95% CI, 85.3 to 85.4), 3.9% (95% CI, 3.6 to 4.2), and 99.7% (95% CI, 99.7 to 99.8), respectively. Conclusion: EMS provider-assessed cardiac symptoms had moderate sensitivity and high specificity for the diagnosis of AMI. EMS policymakers can use these data to evaluate the pertinence of specific prehospital treatment of AMI.
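
As a sketch of the verification-bias correction the abstract describes, the code below reconstructs a weighted 2x2 table from chart-reviewed samples using inverse probability weighting (each verified record weighted by the inverse of its stratum's sampling fraction) and then computes sensitivity, specificity, PPV, and NPV; the strata, counts, and sampling fractions are hypothetical, not the study's data.

```python
# Hypothetical chart-review results from a disproportionate stratified sample.
# Each stratum: EMS-assessed cardiac symptoms (yes/no), its sampling fraction,
# and reviewed cases with / without a final AMI diagnosis.
strata = [
    {"symptom": True,  "sampling_frac": 0.020, "ami": 30, "no_ami": 370},
    {"symptom": False, "sampling_frac": 0.005, "ami": 10, "no_ami": 365},
]

# Inverse-probability weights recover an estimate of the population 2x2 table.
tp = fp = fn = tn = 0.0
for s in strata:
    w = 1.0 / s["sampling_frac"]
    if s["symptom"]:
        tp += w * s["ami"]      # symptom present, AMI confirmed
        fp += w * s["no_ami"]   # symptom present, no AMI
    else:
        fn += w * s["ami"]      # symptom absent, AMI confirmed
        tn += w * s["no_ami"]   # symptom absent, no AMI

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"Se={sensitivity:.3f} Sp={specificity:.3f} PPV={ppv:.3f} NPV={npv:.3f}")
```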

Ultrasonographic Evaluation of Diffuse Thyroid Disease: a Study Comparing Grayscale US and Texture Analysis of Real-Time Elastography (RTE) and Grayscale US

  • Yoon, Jung Hyun;Lee, Eunjung;Lee, Hye Sun;Kim, Eun-Kyung;Moon, Hee Jung;Kwak, Jin Young
    • International Journal of Thyroidology, v.10 no.1, pp.14-23, 2017
  • Background and Objectives: To evaluate and compare the diagnostic performances of grayscale ultrasound (US) and of quantitative parameters obtained from texture analysis of grayscale US and elastography images in patients with diffuse thyroid disease (DTD). Materials and Methods: From September to December 2012, 113 patients (mean age, 43.4±10.7 years) who had undergone preoperative staging US and elastography were included in this study. The thyroid parenchyma was assessed as positive for DTD when US features suggestive of DTD were present. Nine histogram parameters were obtained from the grayscale US and elastography images, from which a 'grayscale index' and an 'elastography index' were calculated. The diagnostic performances of grayscale US and of texture analysis of grayscale US and elastography were calculated and compared. Results: Of the 113 patients, 85 (75.2%) were negative and 28 (24.8%) were positive for DTD on pathology. The presence of US features suggestive of DTD was associated with a significantly higher rate of DTD on pathology, 60.7% versus 8.2% (p<0.001). Specificity, accuracy, and positive predictive value were highest for US features: 91.8%, 84.1%, and 87.6%, respectively (all p<0.05). The grayscale index showed higher sensitivity and negative predictive value (NPV) than US features. All diagnostic performance measures were higher for the grayscale index than for the elastography index. The area under the curve for US features was the highest, 0.762, but did not differ significantly from that of the grayscale index or the elastography mean (all p>0.05). Conclusion: Diagnostic performance was highest for grayscale US features in the diagnosis of DTD. The grayscale index may be used as a complementary tool to US features to improve sensitivity and NPV.
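
The abstract does not list the nine histogram parameters, so the sketch below shows generic first-order histogram statistics that texture analysis of a grayscale region of interest commonly uses (mean, standard deviation, skewness, kurtosis, entropy, percentiles); the ROI here is a random placeholder rather than a thyroid ultrasound image, and this feature set is an assumption made for illustration.

```python
import numpy as np
from scipy import stats

def histogram_features(roi: np.ndarray) -> dict:
    """First-order histogram statistics of an 8-bit grayscale ROI."""
    x = roi.ravel().astype(float)
    hist, _ = np.histogram(x, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins for entropy
    return {
        "mean": x.mean(),
        "std": x.std(),
        "skewness": stats.skew(x),
        "kurtosis": stats.kurtosis(x),
        "entropy": -np.sum(p * np.log2(p)),
        "p10": np.percentile(x, 10),
        "median": np.median(x),
        "p90": np.percentile(x, 90),
        "iqr": stats.iqr(x),
    }

# Placeholder ROI standing in for a thyroid parenchyma patch.
rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(histogram_features(roi))
```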

Fragility-based performance evaluation of mid-rise reinforced concrete frames in near field and far field earthquakes

  • Ansari, Mokhtar;Safiey, Amir;Abbasi, Mehdi
    • Structural Engineering and Mechanics, v.76 no.6, pp.751-763, 2020
  • Available records of recent earthquakes show that near-field earthquakes have different characteristics than far-field earthquakes. Most of the unique characteristics of near-fault records can be attributed to forward directivity, which causes ground-motion records normal to the fault to contain long-period pulses in the velocity time history. The earthquake energy is largely concentrated in these pulses, causing large displacements and, accordingly, severe damage to buildings. Damage caused by past earthquakes raises the need to assess the likelihood of future earthquake damage. A variety of methods exist for evaluating building seismic vulnerability, with different computational costs and accuracies. Among them, fragility curves, which define the probability of structural damage as a function of ground motion characteristics and design parameters, are the most common. These curves express the probability that the structural response will exceed an allowable performance limit at different seismic intensities. This study obtains fragility curves for low- and mid-rise reinforced concrete moment frames by incremental dynamic analysis (IDA). The frames were subjected to an ensemble of 18 ground motions (nine near-fault and nine far-fault records). Finally, their fragility curves were obtained using the limit states provided by HAZUS-MH 2.1. The results show that near-fault earthquakes can drastically influence the fragility curves of the 6-story building, while they have a minimal impact on those of the 3-story building.
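
As a sketch of how a fragility curve can be extracted from IDA results, the code below fits a lognormal fragility function (median θ and dispersion β) by maximum likelihood to the fraction of records exceeding a limit state at each intensity level, following the commonly used binomial-likelihood formulation; the intensity levels and exceedance counts are hypothetical and the limit state is unspecified, so this illustrates the technique rather than reproducing the study's analysis.

```python
import numpy as np
from scipy.stats import norm, binom
from scipy.optimize import minimize

# Hypothetical IDA summary: intensity levels (Sa, g), records analyzed per level,
# and how many of them exceeded the limit-state threshold.
im = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
n_recs = np.full(im.shape, 18)
n_exceed = np.array([0, 2, 6, 11, 15, 17])

def neg_log_like(params):
    theta, beta = params                                  # median, dispersion
    p = norm.cdf(np.log(im / theta) / beta)               # lognormal fragility
    p = np.clip(p, 1e-9, 1 - 1e-9)                        # numerical safety
    return -np.sum(binom.logpmf(n_exceed, n_recs, p))

res = minimize(neg_log_like, x0=[0.7, 0.4],
               bounds=[(1e-3, None), (1e-3, None)])
theta_hat, beta_hat = res.x
print(f"fragility median = {theta_hat:.2f} g, dispersion = {beta_hat:.2f}")
```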

Quantitative aspects of the hydrolysis of ginseng saponins: Application in HPLC-MS analysis of herbal products

  • Abashev, Mikhail;Stekolshchikova, Elena;Stavrianidi, Andrey
    • Journal of Ginseng Research, v.45 no.2, pp.246-253, 2021
  • Background: Ginseng is one of the most valuable herbal supplements. Quality control of ginseng products is challenging because of the diversity of bioactive saponins in their composition. Acid or alkaline hydrolysis is often used for the structural elucidation of these saponins and the sugars in their side chains. Complete transformation of the original ginsenosides into their aglycones during hydrolysis is one way to determine the total content of a saponin group. The main hurdle of this approach is the formation of various by-products, as reported by many authors. Methods: Separate HPLC assessment of the total protopanaxadiol, protopanaxatriol, and ocotillol ginsenoside contents is a viable alternative to the determination of characteristic biomarkers of these saponin groups, such as ginsenoside Rf and pseudoginsenoside F11, which are commonly used for the authentication of P. ginseng Meyer and P. quinquefolius L. samples, respectively. Moreover, total ginsenoside content is an ideal aggregated parameter for the standardization and quality control of ginseng-based medicines, because it can be applied directly to saponin dosage calculation. Results: Different hydrolysis conditions were tested to develop an accurate quantification method for determining total ginsenoside contents in herbal products. Linearity, limits of quantification, limits of detection, accuracy, and precision were evaluated for the developed HPLC-MS method. Conclusion: Alkaline hydrolysis produces fewer by-products than sugar elimination under acidic conditions. An equimolar response, a key parameter for quantification, was established for several major ginsenosides. The developed approach showed acceptable results in the analysis of several different herbal products.

Bitcoin and the Monetary System Revolution Changes

  • Alotaibi, Leena;Alsalmi, Azhar;Alsuwat, Hatim;Alsuwat, Emad
    • International Journal of Computer Science & Network Security, v.21 no.6, pp.156-160, 2021
  • Every day brings new challenges to humanity. Modern life requires accuracy, privacy, integrity, authenticity, and security in its core systems, especially the monetary system, and conditions now differ from those of previous centuries. Advances in digital banking have opened up some of the most significant innovations for human beings, and the monetary system continues to develop day by day to serve the public. Electronic money has amazed the world and has posed a challenge to central banking; this calls for strict security, information, and confidence. Blockchain technology has opened new gateways, and Bitcoin has become the most famous digital currency, taking the digital market by storm. Blockchain, as a new financial technology, has addressed many security issues and enabled business to be conducted in secure ways that encourage investors to invest and keep the wheels of world business turning. This paper assesses the sustainability of implementing Bitcoin in financial institutions. Every new system has its pros and cons, from which a clear vision of what we are about to adopt can be formed. This paper demonstrates the evolution of the monetary system and new ways of doing business, presents evidence from academic cases in a comparison table, proposes a method for transitioning safely to the new system, and draws conclusions.

A Performance Comparison of Histogram Equalization Algorithms for Cervical Cancer Classification Model (평활화 알고리즘에 따른 자궁경부 분류 모델의 성능 비교 연구)

  • Kim, Youn Ji;Park, Ye Rang;Kim, Young Jae;Ju, Woong;Nam, Kyehyun;Kim, Kwang Gi
    • Journal of Biomedical Engineering Research, v.42 no.3, pp.80-85, 2021
  • We developed deep learning models to classify cervical images as normal or abnormal (cervical cancer) after applying histogram equalization algorithms, and compared the performance of each model. A total of 4,259 images were used in this study, of which 1,852 were normal and 2,407 were abnormal. Image sharpening (IS), histogram equalization (HE), and contrast-limited adaptive histogram equalization (CLAHE) were applied to the original images. Peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) were used to assess image quality objectively. IS showed a PSNR of 81.75 dB and an SSIM of 0.96, the best image quality. CLAHE and HE showed PSNRs of 62.67 dB and 62.60 dB, respectively, while the SSIM of CLAHE (0.86) was closer to 1 than that of HE (0.75). Using a ResNet-50 model with transfer learning, the processed images were classified as normal or abnormal. The classification accuracies were 90.77% for IS (the highest), 90.26% for CLAHE, and 87.60% for HE. As this study shows, applying appropriate digital image processing to cervical images in computer-aided diagnosis (CAD) can assist both screening and diagnosis.
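
A minimal sketch of the preprocessing and image-quality assessment step described above, assuming OpenCV for CLAHE/HE and scikit-image for PSNR/SSIM; the file name and CLAHE parameters are placeholders, and this is not the authors' exact pipeline.

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder path; any 8-bit grayscale cervical image would do.
original = cv2.imread("cervix_sample.png", cv2.IMREAD_GRAYSCALE)

# Contrast-limited adaptive histogram equalization (illustrative parameters).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe_img = clahe.apply(original)

# Global histogram equalization for comparison.
he_img = cv2.equalizeHist(original)

for name, img in [("CLAHE", clahe_img), ("HE", he_img)]:
    psnr = peak_signal_noise_ratio(original, img)   # dB, higher is better
    ssim = structural_similarity(original, img)     # 1.0 means identical
    print(f"{name}: PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```

The equalized images would then be fed to a ResNet-50 classifier fine-tuned with transfer learning, as the abstract describes.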

Quality Assessment of GPS L2C Signals and Measurements

  • Yun, Seonghyeon;Lee, Hungkyu
    • Journal of Positioning, Navigation, and Timing, v.10 no.1, pp.13-20, 2021
  • A series of numerical experiments with measurements observed at continuously operating reference stations (CORS) of the International GNSS Service (IGS) and the National Geographical Information Institute of Korea (NGII) was carried out to evaluate the quality of the pseudo-ranges and carrier phases of the GPS L2C signal obtained with various receiver types in both benign and harsh operational environments. In this analysis, quality measures such as the signal-to-noise ratio (SNR), multipath magnitude, number of cycle slips, and the pseudo-range and carrier-phase acquisition rates were computed and compared. The SNR analysis showed that the SNR trends of C/A and L2C depend comparably on the receiver type. The multipath analysis also showed clearly different tendencies depending on receiver type, apparently because of the different multipath mitigation algorithms built into each receiver. L2C exhibited fewer cycle slips than P2(Y), and its measurement acquisition rate was higher than that of P2(Y) for all three receiver types. In the harsh observational environment, L2C was not only superior to P2(Y) in all aspects (SNR, multipath magnitude, number of cycle slips, and acquisition rate) but also maintained a quality level equivalent to that of C/A. These results suggest that improved positioning performance, in terms of accuracy and continuity, can be obtained by using L2C instead of the existing P2(Y).
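
The abstract does not state which multipath metric was used, so the sketch below shows the standard dual-frequency code-minus-carrier multipath combination often used to gauge pseudo-range multipath plus noise; the observation values are placeholders, and in practice they would come from RINEX data with cycle slips handled per tracking arc.

```python
import numpy as np

# GPS L1 and L2 carrier frequencies (Hz)
F1, F2 = 1575.42e6, 1227.60e6
ALPHA = (F1 / F2) ** 2          # ~1.6469

def mp2_combination(p2, l1_m, l2_m):
    """Code-minus-carrier multipath observable for the L2 pseudo-range (m).

    p2, l1_m, l2_m: L2 pseudo-range and L1/L2 carrier phases in metres.
    The combination cancels geometry and first-order ionosphere, leaving
    multipath + noise plus a constant ambiguity bias per continuous arc,
    which is removed here by subtracting the arc mean.
    """
    mp2 = p2 - (2 * ALPHA / (ALPHA - 1)) * l1_m + ((ALPHA + 1) / (ALPHA - 1)) * l2_m
    return mp2 - np.mean(mp2)

# Placeholder observations for one continuous arc (metres).
p2 = np.array([2.30e7 + 0.8, 2.30e7 + 1.1, 2.30e7 + 0.6, 2.30e7 + 0.9])
l1 = np.array([2.30e7 + 0.2, 2.30e7 + 0.5, 2.30e7 + 0.1, 2.30e7 + 0.3])
l2 = np.array([2.30e7 + 0.3, 2.30e7 + 0.6, 2.30e7 + 0.2, 2.30e7 + 0.4])

mp2 = mp2_combination(p2, l1, l2)
print(f"RMS of L2 multipath+noise: {np.sqrt(np.mean(mp2 ** 2)):.3f} m")
```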

Post-Fire Damage and Structural Performance Assessment of a Steel-Concrete Composite Bridge Superstructure Using Fluid-Structure Interaction Fire Analysis (FSI 화재해석을 이용한 강합성 교량 상부구조의 화재 후 손상 및 구조성능 평가)

  • Yun, Sung-Hwan;Gil, Heungbae
    • KSCE Journal of Civil and Environmental Engineering Research, v.41 no.6, pp.627-635, 2021
  • The fire damage and structural performance of a steel-concrete composite highway bridge superstructure exposed to fire loading from beneath were evaluated. To enhance the accuracy and efficiency of the numerical analysis, a proposed fluid-structure interaction fire analysis method was implemented in Ansys Fluent and Ansys Mechanical. The temperature distribution and structural performance of the steel-concrete composite superstructure were evaluated with the proposed method as a function of the vertical distance from the fire source to the bottom flange. In the analysis, the temperatures of the concrete slab and the bottom flange of the steel-concrete composite superstructure exceeded the critical temperature. In addition, when the vertical distance from the fire source was 13 m or greater, the fire damage of the steel-concrete composite superstructure was found to be within a safe limit.