• Title/Summary/Keyword: Automatic validation

Mid-infrared (MIR) spectroscopy for the detection of cow's milk in buffalo milk

  • Spina, Anna Antonella;Ceniti, Carlotta;Piras, Cristian;Tilocca, Bruno;Britti, Domenico;Morittu, Valeria Maria
    • Journal of Animal Science and Technology
    • /
    • v.64 no.3
    • /
    • pp.531-538
    • /
    • 2022
  • In Italy, buffalo mozzarella is a widely sold and consumed dairy product. Fraudulent adulteration of buffalo milk with cheaper and more readily available milk of other species is very frequent. In the present study, Fourier transform infrared spectroscopy (FTIR), in combination with multivariate analysis by partial least squares (PLS) regression, was applied to quantitatively detect the adulteration of buffalo milk with cow milk using fully automatic equipment dedicated to the routine analysis of milk composition. To enhance heterogeneity, cow and buffalo bulk milk was collected over a period of more than three years from different dairy farms. A total of 119 samples were used for the analysis to generate 17 different concentrations of buffalo-cow milk mixtures. This procedure was used to enhance variability and to properly randomize the trials. The obtained calibration model showed an R2 ≥ 0.99 (R2 cal. = 0.99861; root mean square error of calibration [RMSEC] = 2.04; R2 val. = 0.99803; root mean square error of prediction [RMSEP] = 2.84; root mean square error of cross-validation [RMSECV] = 2.44), suggesting that this method could be successfully applied in the routine analysis of buffalo milk composition, providing rapid screening for possible adulteration with cow's milk at no additional cost.
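The error figures reported above (RMSEC, RMSEP, RMSECV) all share the same root-mean-square form, differing only in which sample set they are computed over. A minimal sketch of that computation, using invented adulteration percentages rather than the paper's spectra:

```python
import math

def rmse(predicted, reference):
    """Root mean square error between predicted and reference values."""
    assert len(predicted) == len(reference)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / len(predicted))

# Hypothetical cow-milk percentages (reference) vs. PLS-model predictions
# for a handful of buffalo-cow mixtures; values are illustrative only.
reference = [0.0, 5.0, 10.0, 25.0, 50.0, 100.0]
predicted = [1.2, 4.1, 11.8, 23.5, 52.3, 98.7]

print(round(rmse(predicted, reference), 2))
```

Computed over the calibration set this gives RMSEC, over an independent prediction set RMSEP, and over cross-validation folds RMSECV.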

The World as Seen from Venice (1205-1533) as a Case Study of Scalable Web-Based Automatic Narratives for Interactive Global Histories

  • NANETTI, Andrea;CHEONG, Siew Ann
    • Asian review of World Histories
    • /
    • v.4 no.1
    • /
    • pp.3-34
    • /
    • 2016
  • This introduction is both a statement of a research problem and an account of the first research results toward its solution. As more historical databases come online and overlap in coverage, we need to discuss the two main issues that have so far prevented 'big' results from emerging. Firstly, historical data are regarded by computer scientists as unstructured; that is, historical records cannot be easily decomposed into unambiguous fields, as population (birth and death) records and taxation data can. Secondly, machine-learning tools developed for structured data cannot be applied as they are to historical research. We propose a complex-network, narrative-driven approach to mining historical databases. In such a time-integrated network, obtained by overlaying records from historical databases, the nodes are actors, while the links are actions. In the case study that we present (the world as seen from Venice, 1205-1533), the actors are governments, while the actions are limited to war, trade, and treaty to keep the case study tractable. We then identify key periods and key events, and hence key actors and key locations, through a time-resolved examination of the actions. This tool allows historians to deal with historical data issues (e.g., source provenance identification, event validation, trade-conflict-diplomacy relationships, etc.). On a higher level, this automatic extraction of key narratives from a historical database allows historians to formulate hypotheses on the courses of history, and to test these hypotheses on other actions or on additional data sets. Our vision is that this narrative-driven analysis of historical data can lead to the development of multiple-scale agent-based models, which can be simulated on a computer to generate ensembles of counterfactual histories that would deepen our understanding of how our actual history developed the way it did. The generation of such narratives, automatically and in a scalable way, will revolutionize the practice of history as a discipline, because historical knowledge, that is, the treasure of human experience (i.e., the heritage of the world), will become something that can be inherited by machine-learning algorithms and used in smart cities to highlight and explain present ties and illustrate potential future scenarios.
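The time-integrated network described above (actors as nodes, actions as links) can be sketched with a few lines of standard Python. The records below are invented placeholders in the spirit of the Venice case study, not data from the paper:

```python
from collections import Counter

# Illustrative, invented (year, actor_a, action, actor_b) records,
# overlaid into one time-integrated network.
records = [
    (1204, "Venice", "war", "Byzantium"),
    (1261, "Venice", "treaty", "Byzantium"),
    (1265, "Venice", "trade", "Byzantium"),
    (1350, "Venice", "war", "Genoa"),
    (1381, "Venice", "treaty", "Genoa"),
]

# Nodes are actors, links are actions; key actors emerge from link counts.
degree = Counter()
for _, a, action, b in records:
    degree[a] += 1
    degree[b] += 1

print(degree.most_common(1))
```

Restricting the same count to a sliding time window yields the time-resolved view used to identify key periods and events.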

Development of Web-based Off-site Consequence Analysis Program and its Application for ILRT Extension (격납건물종합누설률시험 주기연장을 위한 웹기반 소외결말분석 프로그램 개발 및 적용)

  • Na, Jang-Hwan;Hwang, Seok-Won;Oh, Ji-Yong
    • Journal of the Korean Society of Safety
    • /
    • v.27 no.5
    • /
    • pp.219-223
    • /
    • 2012
  • For off-site consequence analysis at nuclear power plants, the MELCOR Accident Consequence Code System (MACCS) II code is widely used as a software tool. In this study, the algorithm of a web-based off-site consequence analysis program (OSCAP) using the MACCS II code was developed for Integrated Leak Rate Test (ILRT) interval extension and Level 3 probabilistic safety assessment (PSA), and verification and validation (V&V) of the program was performed. The main input data for the MACCS II code are meteorological, population distribution, and source term information. However, generating these input data for an off-site consequence analysis with the MACCS II code requires considerable time and effort. For example, the meteorological data are collected from each nuclear power site in real time, but the formats of the raw data differ from site to site. To reduce the effort and time required for risk assessments, the web-based OSCAP has an automatic processing module that converts the raw data collected from each site into the MACCS II input format. The program also automatically converts the latest population data from Statistics Korea, the national statistical office, into the population distribution input format of the MACCS II code. For the source term data, the program includes the release fraction of each source term category resulting from modular accident analysis program (MAAP) code analysis and the core inventory data from ORIGEN. These analysis results for each plant in Korea are stored in a database module of the web-based OSCAP, so the user can select the default source term data of each plant without handling source term input data directly.
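The per-site format-conversion module described above follows a common adapter pattern: one parser per site format, one shared output writer. The site formats and the output layout below are invented for illustration; they are not the actual MACCS II input layout:

```python
# Hypothetical raw formats from two sites, normalized to one common record.

def parse_site_a(line):
    # e.g. "2012-05-01 03 270 4.2" -> date, hour, wind direction (deg), speed (m/s)
    date, hour, wdir, wspd = line.split()
    return {"hour": int(hour), "dir_deg": int(wdir), "speed_ms": float(wspd)}

def parse_site_b(line):
    # e.g. "03;4.2;270" -> hour; speed; direction, semicolon-separated
    hour, wspd, wdir = line.split(";")
    return {"hour": int(hour), "dir_deg": int(wdir), "speed_ms": float(wspd)}

def to_common(record):
    # One fixed-width output line per hourly record (illustrative layout).
    return f"{record['hour']:02d} {record['dir_deg']:03d} {record['speed_ms']:5.1f}"

print(to_common(parse_site_a("2012-05-01 03 270 4.2")))
print(to_common(parse_site_b("03;4.2;270")))
```

Both raw lines normalize to the same record, which is the point of the module: downstream code sees one format regardless of the source site.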

Automatic Software Requirement Pattern Extraction Method Using Machine Learning of Requirement Scenario (요구사항 시나리오 기계 학습을 이용한 자동 소프트웨어 요구사항 패턴 추출 기법)

  • Ko, Deokyoon;Park, Sooyong;Kim, Suntae;Yoo, Hee-Kyung;Hwang, Mansoo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.263-271
    • /
    • 2016
  • Software requirement analysis is necessary for a successful software development project. In particular, incomplete requirements are the most influential cause of software project failure. Incomplete requirements can lead to schedule delays and budget overruns because of misunderstandings and ambiguous criteria for project validation. Software requirement patterns can help authors write more complete requirements; they can serve as a reference model and standard when writing or validating software requirements. Furthermore, when a novice writes a software scenario, the requirement patterns can serve as a guideline. This paper proposes an automatic approach to identifying software scenario patterns from various software scenarios. We gathered 83 scenarios from eight industrial systems, and show how to extract 54 scenario patterns and how to find omitted actions in a scenario using the extracted patterns, demonstrating the feasibility of the approach.
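One simple way to mine recurring scenario patterns, sketched here with invented scenarios and a toy frequency threshold (the paper's actual machine-learning method is richer than bigram counting):

```python
from collections import Counter

# Toy scenarios: each is a sequence of user/system actions.
scenarios = [
    ["login", "search", "select", "pay", "logout"],
    ["login", "search", "select", "logout"],
    ["login", "browse", "select", "pay", "logout"],
]

# Count adjacent action pairs across all scenarios.
bigrams = Counter()
for actions in scenarios:
    for pair in zip(actions, actions[1:]):
        bigrams[pair] += 1

# Keep pairs seen at least twice as candidate patterns.
patterns = {pair for pair, n in bigrams.items() if n >= 2}

# A frequent pattern that is broken in one scenario (here, "select" not
# followed by "pay" in the second scenario) hints at an omitted action.
print(("select", "pay") in patterns)
```

The omitted-action check in the abstract corresponds to flagging scenarios that break a pattern most other scenarios follow.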

Development of Automatic Airborne Image Orthorectification Using GPS/INS and LIDAR Data (GPS/INS와 LIDAR자료를 이용한 자동 항공영상 정사보정 개발)

  • Jang Jae-Dong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.4
    • /
    • pp.693-699
    • /
    • 2006
  • Digital airborne images must be precisely orthorectified to serve as geographical information. For orthorectification of the airborne images, GPS/INS (Global Positioning System/Inertial Navigation System) and LIDAR (LIght Detection And Ranging) elevation data were employed. In this study, 635 frames of airborne imagery were produced, and the LIDAR data were converted to a raster image for use in image orthorectification. To derive images with constant brightness, flat-field correction was applied. The airborne images were geometrically corrected by calculating interior and exterior orientation using the GPS/INS data, and then orthorectified using a LIDAR digital elevation model image. The precision of the orthorectified images was validated using 50 ground control points collected from five arbitrary images and the LIDAR intensity image. The validation yielded an RMSE (Root Mean Square Error) of 0.387, approximately twice the pixel spatial resolution. This automatic, high-precision orthorectification method for airborne images can be applied in the airborne imaging industry.
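The validation step above reduces to a positional RMSE over ground control points (GCPs). A minimal sketch with invented point pairs (not the study's 50 GCPs):

```python
import math

# (x_image, y_image, x_ref, y_ref) in pixels: orthorectified image
# coordinates vs. reference coordinates at each GCP. Invented values.
gcps = [
    (100.3, 200.1, 100.0, 200.0),
    (150.2, 250.4, 150.0, 250.0),
    (199.7, 299.8, 200.0, 300.0),
]

# RMSE of the 2-D positional error across all GCPs.
sq_err = [(xi - xr) ** 2 + (yi - yr) ** 2 for xi, yi, xr, yr in gcps]
rmse = math.sqrt(sum(sq_err) / len(sq_err))
print(round(rmse, 3))
```

An RMSE below a pixel or two, as reported in the abstract, indicates the orthorectified imagery is positionally consistent with the reference.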

Grading System of Movie Review through the Use of An Appraisal Dictionary and Computation of Semantic Segments (감정어휘 평가사전과 의미마디 연산을 이용한 영화평 등급화 시스템)

  • Ko, Min-Su;Shin, Hyo-Pil
    • Korean Journal of Cognitive Science
    • /
    • v.21 no.4
    • /
    • pp.669-696
    • /
    • 2010
  • Assuming that the whole meaning of a document is a composition of the meanings of its parts, this paper studies the automatic grading of movie reviews containing sentimental expressions. This is accomplished by calculating the values of semantic segments and performing data classification for each review. The ARSSA (Automatic Rating System for Sentiment analysis using an Appraisal dictionary) system is an effort to model decision-making processes in a manner similar to that of the human mind. It aims to resolve the discontinuity between the numerical rating and the textual rationalization present in the binary structure of the current review rating system, {rate: review}. The model is realized by performing analysis on the abstract meanings extracted from each review. The performance of the system was evaluated by a 10-fold cross-validation test on 1,000 reviews obtained from the Naver Movie site. The system achieved an 85% F1 score against the predefined values, using a predefined appraisal dictionary.
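The compositional idea above can be sketched as dictionary-based scoring: sum per-token appraisal values, then map the total to a grade. The dictionary entries and thresholds below are invented; ARSSA composes semantic segments rather than bare word scores:

```python
# Toy appraisal dictionary: token -> sentiment weight (invented values).
appraisal = {"great": 2, "moving": 1, "boring": -2, "awful": -3}

def score(review):
    """Sum the appraisal values of known tokens; unknown tokens score 0."""
    return sum(appraisal.get(token, 0) for token in review.lower().split())

def grade(review):
    """Map the composed score to a coarse grade."""
    s = score(review)
    return "positive" if s > 0 else "negative" if s < 0 else "neutral"

print(grade("A great and moving film"))  # positive
print(grade("boring and awful"))         # negative
```

Replacing bare tokens with semantic segments (so that negation and intensifiers modify the values they attach to) is what separates the paper's method from this word-level baseline.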

Automatic Construction of SHACL Schemas for RDF Knowledge Graphs Generated by R2RML Mappings

  • Choi, Ji-Woong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.8
    • /
    • pp.9-21
    • /
    • 2020
  • With the proliferation of RDF knowledge graphs (KGs), a need arose for a standardized schema representation of the graph model for effective data interchangeability and interoperability. This need led the W3C to develop the SHACL specification for describing and validating the structure of RDF graphs. Relational databases (RDBs) are one of the major sources for acquiring structured knowledge. The standard for automatic generation of RDF KGs from RDBs is R2RML, also developed by the W3C. Since R2RML is designed to generate only RDF data graphs from RDBs, additional manual tasks are required to create schemas for those graphs. In this paper we propose an approach to automatically generate SHACL schemas for RDF KGs populated by R2RML mappings. The key of our approach is that the SHACL schemas are built solely from R2RML documents. We describe an implementation of our approach and show its validity with the R2RML test cases designed by the W3C.
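A minimal sketch of the idea: derive a SHACL NodeShape from the same table metadata an R2RML mapping would use. The table, the `ex:` prefix, and the SQL-to-XSD mapping below are invented for illustration, and a real implementation would read the R2RML document itself:

```python
# Hypothetical SQL-type to XSD-datatype mapping.
SQL_TO_XSD = {"INTEGER": "xsd:integer", "VARCHAR": "xsd:string", "DATE": "xsd:date"}

def table_to_shape(table, columns):
    """Emit a Turtle SHACL NodeShape for (name, sql_type, nullable) columns."""
    lines = [f"ex:{table}Shape a sh:NodeShape ;",
             f"    sh:targetClass ex:{table} ;"]
    for name, sql_type, nullable in columns:
        lines.append(f"    sh:property [ sh:path ex:{name} ;")
        lines.append(f"        sh:datatype {SQL_TO_XSD[sql_type]} ;")
        min_count = 0 if nullable else 1  # NOT NULL -> sh:minCount 1
        lines.append(f"        sh:minCount {min_count} ; sh:maxCount 1 ] ;")
    # Terminate the final statement with "." instead of ";".
    lines[-1] = lines[-1].rstrip(";").rstrip() + "."
    return "\n".join(lines)

print(table_to_shape("Employee", [("empno", "INTEGER", False),
                                  ("name", "VARCHAR", True)]))
```

The same cardinality logic (NOT NULL to `sh:minCount 1`, single-valued columns to `sh:maxCount 1`) is the kind of constraint a shape generator can recover mechanically from the mapped source.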

Korean Syntactic Rules using Composite Labels (복합 레이블을 적용한 한국어 구문 규칙)

  • 김성용;이공주;최기선
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.2
    • /
    • pp.235-244
    • /
    • 2004
  • We propose a format of a binary phrase structure grammar with composite labels. The grammar adopts binary rules so that the dependency between two sub-trees can be represented in the label of the tree. The label of a tree is composed of two attributes, each extracted from one sub-tree, so that it represents the compositional information of the tree. The composite label is generated from part-of-speech tags using an automatic labeling algorithm. Since the proposed rule description scheme is binary and uses only part-of-speech information, it can readily be used in dependency grammar and applied to other languages as well. In best-1 context-free cross-validation on a corpus of 31,080 tree-tagged sentences, the labeled precision is 79.30%, which outperforms phrase structure grammar and dependency grammar by 5% and 4%, respectively. This shows that the proposed rule description scheme is effective for parsing Korean.
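The composite-label construction can be sketched in a few lines: each binary node's label combines one attribute from each sub-tree. The extraction rule and tags below are invented for illustration, not the paper's actual labeling algorithm:

```python
def compose(left_label, right_label):
    """Toy composite label: first attribute of the left sub-tree's label
    plus last attribute of the right sub-tree's label."""
    return f"{left_label.split('+')[0]}+{right_label.split('+')[-1]}"

# Trees as (label, left, right) triples; leaves carry a POS tag.
def leaf(tag):
    return (tag, None, None)

def branch(left, right):
    return (compose(left[0], right[0]), left, right)

# Binary tree over POS tags for a determiner-adjective-noun phrase.
np = branch(leaf("DET"), branch(leaf("ADJ"), leaf("NOUN")))
print(np[0])  # DET+NOUN
```

Because the composed label is built purely from POS tags, the same scheme transfers to any tag set, which is what the abstract claims for other languages.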

A Novel, Deep Learning-Based, Automatic Photometric Analysis Software for Breast Aesthetic Scoring

  • Joseph Kyu-hyung Park;Seungchul Baek;Chan Yeong Heo;Jae Hoon Jeong;Yujin Myung
    • Archives of Plastic Surgery
    • /
    • v.51 no.1
    • /
    • pp.30-35
    • /
    • 2024
  • Background: Breast aesthetics evaluation often relies on subjective assessment, creating a need for objective, automated tools. We developed the Seoul Breast Esthetic Scoring Tool (S-BEST), a photometric analysis software that uses a DenseNet-264 deep learning model to automatically evaluate breast landmarks and asymmetry indices. Methods: S-BEST was trained on a dataset of frontal breast photographs annotated with 30 specific landmarks, divided into an 80-20 training-validation split. The software takes the sternal-notch-to-nipple or nipple-to-nipple distance as input and performs image preprocessing steps, including ratio correction and 8-bit normalization. Breast asymmetry indices and centimeter-based measurements are provided as output. The accuracy of S-BEST was validated with a paired t-test and Bland-Altman plots, comparing its measurements with those obtained from physical examinations of 100 females diagnosed with breast cancer. Results: S-BEST demonstrated high accuracy in automatic landmark localization, with most distances showing no statistically significant difference from physical measurements. However, the nipple-to-inframammary-fold distance showed a significant bias, with coefficients of determination of 0.3787 and 0.4234 for the left and right sides, respectively. Conclusion: S-BEST provides a fast, reliable, and automated approach to breast aesthetic evaluation based on 2D frontal photographs. While limited by its inability to capture volumetric attributes or multiple viewpoints, it serves as an accessible tool for both clinical and research applications.
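The Bland-Altman comparison used for validation reduces to computing, per subject, the difference between the automated and physical measurement, then the mean difference (bias) and 95% limits of agreement. A sketch with invented measurement pairs (not the study's data):

```python
import statistics

# Hypothetical paired measurements in cm: software vs. physical exam.
auto =     [20.1, 19.5, 21.0, 18.8, 20.4]
physical = [19.8, 19.9, 20.5, 19.0, 20.0]

diffs = [a - p for a, p in zip(auto, physical)]
bias = statistics.mean(diffs)                 # systematic offset
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)    # 95% limits of agreement

print(round(bias, 3))
```

A bias near zero with narrow limits of agreement indicates the two methods can be used interchangeably; a consistent offset, as reported for the nipple-to-inframammary-fold distance, shows up as a bias well away from zero.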

Developing Surface Water Quality Modeling Framework Considering Spatial Resolution of Pollutant Load Estimation for Saemangeum Using HSPF (오염원 산정단위 수준의 소유역 세분화를 고려한 새만금유역 수문·수질모델링 적용성 검토)

  • Seong, Chounghyun;Hwang, Syewoon;Oh, Chansung;Cho, Jaepil
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.59 no.3
    • /
    • pp.83-96
    • /
    • 2017
  • This study presents a surface water quality modeling framework that considers the spatial resolution of pollutant load estimation, to better represent stream water quality characteristics in the Saemangeum watershed, where keeping water resources sustainable has been a focus since the Saemangeum embankment construction. The watershed was delineated into 804 sub-watersheds in total based on the administrative districts (the units of pollutant load estimation, 739 of which fall within the watershed), a Digital Elevation Model (DEM), and agricultural structures such as drainage canals. The established model consists of 7 Mangyung (MG) sub-models, 7 Dongjin (DJ) sub-models, and 3 Reclaimed sub-models; the sub-models were simulated in sequence from upstream to downstream based on their connectivity. Hydrologic calibration and validation were conducted at 14 flow stations for the period from 2009 to 2013 using an automatic calibration scheme. For the calibration and validation stations, the Nash-Sutcliffe coefficient (NSE) ranged from 0.66 to 0.97, PBIAS from -31.0 to 16.5%, and R2 from 0.75 to 0.98 at a monthly time step; the model therefore showed hydrological applicability to the watershed. Water quality calibration and validation were conducted at 29 stations for the constituents DO, BOD, TN, and TP during the same period as the flow. The water quality model was manually calibrated and generally showed applicability, reproducing reasonable variability and seasonality, although some exceptional simulation results were identified at some upstream stations under low-flow conditions. The spatial subdivision in the model framework was compared with previous studies to assess the use of administrative boundaries for watershed delineation; this study outperformed previous ones in flow but showed a similar level of model performance in water quality. The framework presented here is applicable to regional-scale watersheds as well as to applications requiring fine-resolution simulation.
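The three goodness-of-fit statistics reported above have standard definitions; a minimal sketch for NSE and PBIAS, with invented monthly flow series rather than the study's data:

```python
# Hypothetical observed and simulated monthly flows.
obs = [10.0, 20.0, 30.0, 25.0, 15.0]
sim = [12.0, 18.0, 29.0, 27.0, 15.0]

mean_obs = sum(obs) / len(obs)

# Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no better
# than predicting the observed mean.
nse = 1 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / \
          sum((o - mean_obs) ** 2 for o in obs)

# Percent bias: positive means the model underestimates on average.
pbias = 100 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

print(round(nse, 3), round(pbias, 2))
```

Against these definitions, the reported NSE of 0.66-0.97 and PBIAS within -31.0 to 16.5% correspond to a model that tracks monthly flow variability well with moderate volume errors at some stations.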