• Title/Summary/Keyword: Validation Studies


Validation of Self-Administered Dietary Assessment Questionnaires Developed for Japanese Subjects: Systematic Review

  • Satoshi Sasaki;Kim, Mi-Kyung
    • Journal of Community Nutrition
    • /
    • v.5 no.2
    • /
    • pp.83-92
    • /
    • 2003
  • Several self-administered dietary assessment questionnaires have recently been developed, validated, and used in nutritional epidemiological and clinical studies in Japan. This article reviews recent evidence on their development and validation. After an extensive search of articles published in English and Japanese, we identified 25 articles on 13 questionnaires for which validation studies exist. The number of foods/menus assessed varied from 31 to 169 across questionnaires. Eleven questionnaires were of the food-frequency type, either with fixed portion sizes or semiquantitative, and two were of the diet-history type. All 13 questionnaires were validated against intakes assessed with dietary records or 24-hour recalls, and only two against biomarkers. The number of subjects in the studies ranged from 23 to 350, and all studies used adult subjects. In the studies with dietary records or recalls, the correlation coefficient for energy intake was between 0.22 and 0.65 (median = 0.44). The median correlation coefficient for nutrients was between 0.21 and 0.61. In the studies with biomarkers, serum marine-origin n-3 polyunsaturated fatty acids and carotenes, and urinary potassium, appeared to be useful biomarkers. In conclusion, recent progress in this field in Japan is remarkable, but more research is needed on validation studies with biomarkers and on the development and validation of questionnaires for children and elderly subjects. (J Community Nutrition 5(2) : 83∼92, 2003)

Scoping Review of Machine Learning and Deep Learning Algorithm Applications in Veterinary Clinics: Situation Analysis and Suggestions for Further Studies

  • Kyung-Duk Min
    • Journal of Veterinary Clinics
    • /
    • v.40 no.4
    • /
    • pp.243-259
    • /
    • 2023
  • Machine learning and deep learning (ML/DL) algorithms have been successfully applied in medical practice. However, their application in veterinary medicine is relatively limited, possibly due to limitations in the quantity and quality of relevant research. Because the potential demand for ML/DL applications in veterinary clinics is significant, it is important to note the current gaps in the literature and explore possible directions for advancement in this field. Thus, a scoping review was conducted as a situation analysis. We developed a search strategy following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. PubMed and Embase databases were used in the initial search. The identified items were screened based on predefined inclusion and exclusion criteria. Information regarding model development, quality of validation, and model performance was extracted from the included studies. The review found 55 studies that passed the criteria. In terms of target animals, the number of studies on industrial animals was similar to that on companion animals. Prediction studies (n = 11, including duplications) were quantitatively scarce in both industrial and non-industrial animal studies compared to diagnostic studies (n = 45, including duplications). Qualitative limitations were also identified, especially regarding validation methodologies. Considering these gaps in the literature, future studies examining prediction and validation processes, employing prospective and multi-center approaches, are highly recommended. Veterinary practitioners should acknowledge the current limitations in this field and adopt a receptive yet critical attitude towards these new technologies to avoid their misuse.

Basic Principles of the Validation for Good Laboratory Practice Institutes

  • Cho, Kyu-Hyuk;Kim, Jin-Sung;Jeon, Man-Soo;Lee, Kyu-Hong;Chung, Moon-Koo;Song, Chang-Woo
    • Toxicological Research
    • /
    • v.25 no.1
    • /
    • pp.1-8
    • /
    • 2009
  • Validation specifies and coordinates all relevant activities to ensure compliance with good laboratory practice (GLP) according to suitable international standards. This includes past, present, and future validation activities as the best possible means of ensuring the integrity of non-clinical laboratory data. Recently, validation has become increasingly important, not only in good manufacturing practice (GMP) institutions but also in GLP facilities. In accordance with the guideline for GLP regulations, all equipment used to generate, measure, or assess data should undergo validation to ensure that it is of appropriate design and capacity and that it will consistently function as intended. Therefore, the implementation of validation processes is considered an essential step in a global institution. This review describes the procedures and documentation required for GLP validation. It introduces basic elements such as the validation master plan, risk assessment, gap analysis, design qualification, installation qualification, operational qualification, performance qualification, calibration, traceability, and revalidation.

HFFB technique and its validation studies

  • Xie, Jiming;Garber, Jason
    • Wind and Structures
    • /
    • v.18 no.4
    • /
    • pp.375-389
    • /
    • 2014
  • The high-frequency force-balance (HFFB) technique and its subsequent improvements are reviewed in this paper, including a discussion of nonlinear mode shape corrections, multi-force balance measurements, and the use of HFFB models to identify aeroelastic parameters. To apply the HFFB technique in engineering practice, various validation studies have been conducted. This paper presents the results of an analytical validation study for a simple building with nonlinear mode shapes, three experimental validation studies for more complicated buildings, and a field measurement comparison for a super-tall building in Hong Kong. The results of these validations confirm that the improved HFFB technique is generally adequate for engineering applications. Some technical limitations of HFFB are also discussed, especially regarding higher-order mode response, which can be considerable for super-tall buildings.

Validity of Instrument Development Research in Korean Nursing Research (한국의 도구개발 간호연구에서의 타당도에 대한 고찰)

  • Lee, Kyunghee;Shin, Sujin
    • Journal of Korean Academy of Nursing
    • /
    • v.43 no.6
    • /
    • pp.697-703
    • /
    • 2013
  • Purpose: This integrative review study was done to analyze methods used for validation studies in Korean nursing research. Methods: The literature on instrument development in nursing research from the Research Information Sharing Service (RISS) and major nursing journal databases in Korea was examined. The MeSH search terms included 'nursing', 'instrument', 'instrument development', and 'validation', and 189 articles were included in the review. Results: The most frequently reported validity type was content validity, followed by construct validity and criterion validity. One third of the studies reported a single type of validity, and 15% demonstrated three kinds of validity at the same time. In about 40% of the studies, both content and construct validity were examined. Conclusion: The results indicate that a wider variety of evidence is necessary to establish whether instruments are valid enough for use in nursing research.
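Content validity, the most frequently reported type in the review above, is commonly quantified with the item-level content validity index (I-CVI): the proportion of expert panelists rating an item as relevant. A minimal sketch with hypothetical expert ratings on the usual 4-point relevance scale:

```python
# Hypothetical ratings by five experts for three instrument items
# (4-point relevance scale; 3 or 4 counts as "relevant").
ratings = {
    "item1": [4, 4, 3, 4, 3],
    "item2": [4, 2, 3, 4, 4],
    "item3": [2, 3, 2, 4, 3],
}

def i_cvi(scores):
    # Proportion of experts rating the item 3 or 4.
    return sum(s >= 3 for s in scores) / len(scores)

for item, scores in ratings.items():
    print(item, i_cvi(scores))
```

An I-CVI of 0.78 or higher (with three or more experts) is a commonly cited acceptance threshold; items below it are revised or dropped.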

An Evaluation Study on Artificial Intelligence Data Validation Methods and Open-source Frameworks (인공지능 데이터 품질검증 기술 및 오픈소스 프레임워크 분석 연구)

  • Yun, Changhee;Shin, Hokyung;Choo, Seung-Yeon;Kim, Jaeil
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.10
    • /
    • pp.1403-1413
    • /
    • 2021
  • In this paper, we investigate automated data validation techniques for artificial intelligence training and survey open-source frameworks, such as Google's TensorFlow Data Validation (TFDV), that support automated data validation in the AI model development process. We also introduce an experimental study using public datasets to demonstrate the effectiveness of the open-source data validation framework. In particular, we present experimental results for the schema-testing data validation functions and discuss the limitations of current open-source frameworks for semantic data. Lastly, we introduce the latest studies on semantic data validation using machine learning techniques.
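The schema-testing idea evaluated in the paper above can be illustrated in a few lines: infer a schema (column names and types) from a reference dataset, then flag records in new data that violate it. This is a minimal pure-Python sketch of the concept, not the TFDV API itself:

```python
# Infer a simple schema from reference rows: for each column, record the
# observed type and value range.
reference = [
    {"age": 34, "label": 0},
    {"age": 51, "label": 1},
]

def infer_schema(rows):
    schema = {}
    for col in rows[0]:
        values = [r[col] for r in rows]
        schema[col] = {"type": type(values[0]), "min": min(values), "max": max(values)}
    return schema

def validate(rows, schema):
    # Flag missing columns and type mismatches as (row_index, column, reason).
    anomalies = []
    for i, row in enumerate(rows):
        for col, spec in schema.items():
            if col not in row:
                anomalies.append((i, col, "missing"))
            elif not isinstance(row[col], spec["type"]):
                anomalies.append((i, col, "type"))
    return anomalies

schema = infer_schema(reference)
new_data = [{"age": 29, "label": 0}, {"age": "forty", "label": 1}]
print(validate(new_data, schema))  # the second record has a string where an int is expected
```

TFDV performs the same kind of check at scale (`infer_schema` over dataset statistics, then anomaly detection against new statistics), plus distribution-drift checks that this sketch omits.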

Validation Process of HPLC Assay Methods of Drugs in Biological Samples (생체시료내 약물의 HPLC 분석법에 대한 유효성 검토방법)

  • Chi, Sang-Cheol;Jun, H.-Won
    • Journal of Pharmaceutical Investigation
    • /
    • v.21 no.3
    • /
    • pp.179-188
    • /
    • 1991
  • An HPLC assay method for a drug that is to be applied to pharmacokinetic studies of the drug should be completely validated. The validation process for an HPLC assay method in a biological sample is discussed using data obtained from the development of an HPLC method for the simultaneous quantitation of verapamil and norverapamil in human serum. The validation criteria covered are specificity, linearity, accuracy, precision, sensitivity, recovery, drug stability, and ruggedness of the assay method.
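Three of the validation criteria listed above (linearity, accuracy, precision) reduce to simple statistics on calibration and quality-control data. A minimal sketch with hypothetical numbers (all concentrations and responses invented for illustration):

```python
from statistics import mean, stdev

# Linearity: peak-area response vs. spiked concentration (ng/mL) of
# hypothetical calibration standards; report the calibration r^2.
conc = [10, 50, 100, 250, 500]
area = [12.1, 60.3, 119.8, 301.5, 602.0]

n = len(conc)
sx, sy = mean(conc), mean(area)
r = sum((x - sx) * (y - sy) for x, y in zip(conc, area)) / (
    (n - 1) * stdev(conc) * stdev(area))
print("linearity r^2:", round(r * r, 4))

# Accuracy: mean % recovery of replicate quality-control samples
# spiked at a known concentration.
qc_recovery = [98.2, 101.5, 99.7, 100.9, 97.8]
print("accuracy (% recovery):", round(mean(qc_recovery), 1))

# Precision: coefficient of variation (%) of the replicate QC measurements.
print("precision (%CV):", round(100 * stdev(qc_recovery) / mean(qc_recovery), 2))
```

Typical bioanalytical acceptance limits put mean recovery within ±15% of nominal and %CV below 15% (20% at the lower limit of quantitation); specificity, recovery, stability, and ruggedness require separate experiments not shown here.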


A convenient approach for penalty parameter selection in robust lasso regression

  • Kim, Jongyoung;Lee, Seokho
    • Communications for Statistical Applications and Methods
    • /
    • v.24 no.6
    • /
    • pp.651-662
    • /
    • 2017
  • We propose an alternative procedure for selecting the penalty parameter in $L_1$ penalized robust regression. This procedure is based on marginalization of a prior distribution over the penalty parameter; the resulting objective function therefore does not involve the penalty parameter, since it has been marginalized out. In addition, the estimating algorithm automatically chooses a penalty parameter using the previous estimate of the regression coefficients. The proposed approach bypasses cross-validation and saves computing time. Variable-wise penalization also performs well from both prediction and variable selection perspectives. Numerical studies using simulated data demonstrate the performance of our proposals, and the proposed methods are applied to the Boston housing data. Through the simulation study and real data application, we show that our proposals are competitive with or much better than cross-validation in terms of prediction, variable selection, and computing time.
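The core idea above (each iteration re-derives the penalty from the previous coefficient estimates, so no cross-validated global lambda is needed) can be caricatured as an iteratively reweighted lasso with variable-wise penalties. This is an illustrative sketch only, using ordinary least-squares loss and synthetic data, not the authors' exact marginalization scheme or their robust loss:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Iteratively reweighted lasso via coordinate descent: each variable gets its
# own penalty, updated from the previous coefficient estimate (large
# coefficients are penalized less), so no tuning-parameter search is run.
beta = np.zeros(p)
for _ in range(50):
    lam = 1.0 / (np.abs(beta) + 0.1)          # variable-wise penalties
    for j in range(p):
        resid = y - X @ beta + X[:, j] * beta[j]   # partial residual
        z = X[:, j] @ resid / n
        beta[j] = soft_threshold(z, lam[j] / n) / (X[:, j] @ X[:, j] / n)
    # (a robust variant would replace the squared loss with e.g. Huber loss)

print(np.round(beta, 2))  # near [2, 0, -1.5, 0, 0]: noise variables shrink to zero
```

The reweighting makes the effective penalty data-driven: once a coefficient is estimated near zero, its penalty grows and keeps it at zero, while genuine signals are penalized only lightly.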

Statistical Issues in Genomic Cohort Studies (유전체 코호트 연구의 주요 통계학적 과제)

  • Park, So-Hee
    • Journal of Preventive Medicine and Public Health
    • /
    • v.40 no.2
    • /
    • pp.108-113
    • /
    • 2007
  • When conducting large-scale cohort studies, numerous statistical issues arise across study design, data collection, data analysis, and interpretation. In genomic cohort studies, these statistical problems become more complicated and need to be dealt with carefully. Rapid technical advances in genomic studies produce enormous amounts of data to be analyzed, and traditional statistical methods are no longer sufficient to handle them. In this paper, we review several important statistical issues that occur frequently in large-scale genomic cohort studies, including measurement error and the relevant correction methods, cost-efficient design strategies for main cohort and validation studies, inflated Type I error, gene-gene and gene-environment interaction, and time-varying hazard ratios. It is very important to employ appropriate statistical methods in order to make the best use of valuable cohort data and produce valid and reliable study results.

A Cross-Validation of Seismic Vulnerability Assessment Model: Application to Earthquake of 9.12 Gyeongju and 2017 Pohang (지진 취약성 평가 모델 교차검증: 경주(2016)와 포항(2017) 지진을 대상으로)

  • Han, Jihye;Kim, Jinsoo
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.649-655
    • /
    • 2021
  • This study aims to cross-validate the performance of the optimal seismic vulnerability assessment model, developed in previous studies of Gyeongju, by applying it to another region. The test area was Pohang City, the site of the 2017 Pohang earthquake, and the dataset was built with the same influencing factors and earthquake-damaged buildings as in the previous studies. The validation dataset was built via random sampling, and prediction accuracy was derived by applying it to a random forest (RF) model trained on Gyeongju. The success and prediction accuracy of the model in Gyeongju were 100% and 94.9%, respectively, and applying the Pohang validation dataset yielded a prediction accuracy of 70.4%.
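The cross-region validation pattern described above (fit on one region, evaluate on another) can be sketched with a random forest on synthetic data. Everything here is invented for illustration: the four features are stand-ins for the study's building-level influencing factors, and the distribution shift mimics the difference between the training region (Gyeongju) and the test region (Pohang):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_region(n, shift):
    # Synthetic stand-in for building-level influencing factors; 'shift'
    # mimics distribution differences between regions.
    X = rng.normal(loc=shift, size=(n, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift * 1.5).astype(int)  # damaged / undamaged
    return X, y

X_train, y_train = make_region(500, shift=0.0)   # "training" region
X_other, y_other = make_region(200, shift=0.3)   # "validation" region

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

acc_same  = accuracy_score(y_train, model.predict(X_train))
acc_other = accuracy_score(y_other, model.predict(X_other))
print("success accuracy (same region):", acc_same)
print("prediction accuracy (other region):", acc_other)
```

As in the study, accuracy on the training region (the "model success" figure) is typically near perfect, while accuracy on a region with a shifted feature distribution drops, which is exactly what the cross-validation against Pohang is designed to measure.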