• Title/Summary/Keyword: Reference Data Set

Search results: 433

An Implementation of Markerless Augmented Reality Using Efficient Reference Data Sets (효율적인 레퍼런스 데이터 그룹의 활용에 의한 마커리스 증강현실의 구현)

  • Koo, Ja-Myoung;Cho, Tai-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.11 / pp.2335-2340 / 2009
  • This paper presents how to implement markerless augmented reality and how to create and apply reference data sets. The implementation consists of three parts: camera setup, creation of the reference data sets, and tracking. Creating effective reference data sets requires a 3D model, such as a CAD model, and the sets must be built from various viewpoints. We extract feature points from the model image and then obtain the 3D position corresponding to each feature point by ray tracing. These 2D/3D correspondence point sets constitute a reference data set of the model, and such sets are constructed for various viewpoints. Fast tracking is achieved by using the reference data set most frequently matched with the feature points of the current frame, together with the model data near that reference data set.
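
The runtime selection step described in the abstract, picking the reference data set that matches the current frame most often, can be sketched as follows. This is an illustrative sketch, not the authors' implementation; representing feature matching as descriptor-keyed dictionaries is an assumed simplification.

```python
# Illustrative sketch: each reference data set maps a feature descriptor to
# its 3D model position, built offline for one viewpoint. At runtime we pick
# the reference set that matches the current frame's features most often.

def best_reference_set(frame_descriptors, reference_sets):
    """Return (index, matched 2D/3D pairs) for the most frequently matched set.

    frame_descriptors: dict descriptor -> 2D image point of the current frame
    reference_sets: list of dicts, descriptor -> 3D model point (one per viewpoint)
    """
    best_idx, best_pairs = -1, []
    for i, ref in enumerate(reference_sets):
        # Collect the 2D/3D correspondences this reference set can explain.
        pairs = [(pt2d, ref[d]) for d, pt2d in frame_descriptors.items() if d in ref]
        if len(pairs) > len(best_pairs):
            best_idx, best_pairs = i, pairs
    return best_idx, best_pairs
```

In a full pipeline, the returned 2D/3D pairs would then feed a pose estimator (e.g., a PnP solver such as OpenCV's `solvePnP`) to recover the camera pose for each frame.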

An Implementation of Markerless Augmented Reality and Creation and Application of Efficient Reference Data Sets (마커리스 증강현실의 구현과 효율적인 레퍼런스 데이터 그룹의 생성 및 활용)

  • Koo, Ja-Myoung;Cho, Tai-Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.204-207 / 2009
  • This paper presents how to implement markerless augmented reality and how to create and apply reference data sets. The implementation consists of three parts: camera setup, creation of the reference data sets, and tracking. Creating effective reference data sets requires a 3D model, such as a CAD model, and the sets must be built from various viewpoints. We extract feature points from the model image and then obtain the 3D position corresponding to each feature point by ray tracing. These 2D/3D correspondence point sets constitute a reference data set of the model, and such sets are constructed for various viewpoints. Fast tracking is achieved by using the reference data set most frequently matched with the feature points of the current frame, together with the model data near that reference data set.


Development of an Editor for Reference Data Library Based on ISO 15926 (ISO 15926 기반의 참조 데이터 라이브러리 편집기의 개발)

  • Jeon, Youngjun;Byon, Su-Jin;Mun, Duhwan
    • Korean Journal of Computational Design and Engineering / v.19 no.4 / pp.390-401 / 2014
  • ISO 15926 is an international standard for the integration of life-cycle data for process plants, including oil and gas facilities. From the viewpoint of information modeling, ISO 15926 Part 2 provides a general data model designed to be used in conjunction with reference data: standard instances that represent classes, objects, properties, and templates common to a number of users, process plants, or both. ISO 15926 Parts 4 and 7 provide the initial set of classes, objects, and properties and the initial set of templates, respectively. User-defined reference data specific to companies or organizations are defined by inheriting from the initial reference data and templates. Supporting such extension requires an editor that can create, delete, and modify user-defined reference data. In this study, an editor for reference data based on ISO 15926 was developed. Sample reference data were encoded in OWL (Web Ontology Language) according to the specification of ISO 15926 Part 8. iRINGTools and dot15926Editor were benchmarked in designing the GUI (graphical user interface). Reference data search, creation, modification, and deletion were implemented with the XML (Extensible Markup Language) DOM (Document Object Model) and SPARQL (SPARQL Protocol and RDF Query Language).
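
As a minimal illustration of the XML DOM approach the abstract mentions, the sketch below reads a tiny OWL fragment and lists its class identifiers. The fragment is a made-up minimal example, not actual ISO 15926 Part 8 reference data, and the editor in the paper is of course far richer.

```python
# Illustrative sketch: reading OWL-encoded reference data with the stdlib
# XML DOM. The OWL fragment is invented for this example.
from xml.dom import minidom

OWL_SAMPLE = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:owl="http://www.w3.org/2002/07/owl#">
  <owl:Class rdf:about="#Pump"/>
  <owl:Class rdf:about="#CentrifugalPump"/>
</rdf:RDF>"""

def class_ids(owl_text):
    """Collect the rdf:about identifier of every owl:Class element."""
    doc = minidom.parseString(owl_text)
    return [el.getAttribute("rdf:about")
            for el in doc.getElementsByTagName("owl:Class")]
```

Searching, creating, and deleting reference data would then be DOM node manipulations over such a document, with SPARQL used for queries against the triple view of the same data.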

Ranking Candidate Genes for the Biomarker Development in a Cancer Diagnostics

  • Kim, In-Young;Lee, Sun-Ho;Rha, Sun-Young;Kim, Byung-Soo
    • Proceedings of the Korean Society for Bioinformatics Conference / 2004.11a / pp.272-278 / 2004
  • Recently, Pepe et al. (2003) employed the receiver operating characteristic (ROC) approach to rank candidate genes from a microarray experiment for biomarker development, with the ultimate purpose of population screening for a cancer. In a cancer microarray experiment based on n patients, the researcher often wants to compare tumor tissue with normal tissue within the same individual using a common reference RNA; this design is referred to as a reference design or an indirect design. Ideally, the experiment produces n pairs of microarray data, where each pair consists of two data sets resulting from reference-versus-normal-tissue and reference-versus-tumor-tissue hybridizations. However, for certain individuals either the normal or the tumor tissue is not large enough to extract sufficient RNA for the microarray experiment, so there are missing values in the normal or the tumor tissue data. In practice, we have $n_1$ pairs of complete observations, $n_2$ 'normal only' and $n_3$ 'tumor only' observations for the experiment with n patients, where $n = n_1 + n_2 + n_3$. We refer to this as a mixed data set, as it contains a mix of fully observed and partially observed pair data. Such a mixed data set was actually observed in a microarray experiment based on human tissues obtained during the surgical operations of cancer patients. Pepe et al. (2003) provide the rationale for using the ROC approach based on two independent samples to rank candidate genes instead of using t or Mann-Whitney statistics. We first modify the ROC approach of ranking genes to a paired data set and further extend it to a mixed data set by taking a weighted average of two ROC values: one obtained from the paired data set and one from the two independent data sets.

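
The weighted-average idea can be sketched numerically. This is a simplified illustration, not the paper's statistic: the empirical ROC value here is the standard AUC estimate P(tumor > normal) with ties counted as 1/2, and weighting by sample counts is an assumption; the paper's paired-data ROC and weighting scheme differ in detail.

```python
# Illustrative sketch of combining ROC values from the paired and unpaired
# parts of a mixed microarray data set.

def empirical_auc(tumor, normal):
    """Empirical ROC value: P(tumor value > normal value), ties count 1/2."""
    wins = sum((t > n) + 0.5 * (t == n) for t in tumor for n in normal)
    return wins / (len(tumor) * len(normal))

def mixed_roc(paired, tumor_only, normal_only):
    """Weighted average of the paired-subset and independent-subset ROC values.

    paired: list of (normal, tumor) measurements from the same patient.
    """
    n1 = len(paired)
    auc_paired = empirical_auc([t for _, t in paired], [n for n, _ in paired])
    auc_indep = empirical_auc(tumor_only, normal_only)
    n_indep = min(len(tumor_only), len(normal_only))  # assumed weight
    return (n1 * auc_paired + n_indep * auc_indep) / (n1 + n_indep)
```

Ranking candidate genes then amounts to computing this mixed ROC value per gene and sorting.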

Choline intake and its dietary reference values in Korea and other countries: a review

  • Shim, Eugene;Park, Eunju
    • Nutrition Research and Practice / v.16 no.sup1 / pp.126-133 / 2022
  • Choline is a water-soluble organic compound that is important for the normal functioning of the body. It is an essential dietary component, as de novo synthesis by the human body is insufficient. Since the United States set Adequate Intakes (AIs) for total choline as dietary reference values in 1998, Australia, China, and the European Union have also established choline AIs. Although choline is clearly essential to life, the 2020 Dietary Reference Intakes for Koreans (KDRIs) did not establish values because very few studies have examined choline intake in Koreans. Since choline intake levels differ by race and country, human studies on Koreans are essential for setting KDRIs. Therefore, the present study was undertaken to provide basic data for developing choline KDRIs in the future by analyzing data on choline intake in Koreans to date, along with the reference values of choline intake and dietary choline intake status by country and race.

PARAMETER IDENTIFICATION FOR NONLINEAR VISCOELASTIC ROD USING MINIMAL DATA

  • Kim, Shi-Nuk
    • Journal of Applied Mathematics & Informatics / v.23 no.1_2 / pp.461-470 / 2007
  • Parameter identification in viscoelastic rods is studied by numerically solving an inverse problem. The material properties of the rod, which appear in the constitutive relations, are recovered by optimizing an objective function constructed from reference strain data. The resulting inverse algorithm consists of an optimization algorithm coupled with a direct algorithm that computes the strain fields for a given set of material properties. Numerical results are presented for two model inverse problems: (i) the effect of noise in the reference strain fields, and (ii) the effect of minimal reference data in space and/or time.
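
The inverse-problem loop described above, an outer optimizer wrapped around a direct solver, can be sketched on a toy problem. The linear "strain" model and golden-section search below are placeholders for the paper's viscoelastic rod solver and optimization algorithm; only the structure of the loop is the point.

```python
# Toy sketch of parameter identification from reference strain data.

def direct_solver(param, times):
    """Stand-in direct problem: strain grows linearly with time."""
    return [param * t for t in times]

def objective(param, times, reference_strain):
    """Least-squares mismatch between model strain and reference strain."""
    model = direct_solver(param, times)
    return sum((m - r) ** 2 for m, r in zip(model, reference_strain))

def identify(times, reference_strain, lo=0.0, hi=10.0, iters=60):
    """Golden-section search for the parameter minimizing the objective."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if objective(c, times, reference_strain) < objective(d, times, reference_strain):
            b = d
        else:
            a = c
    return (a + b) / 2
```

The paper's two experiments map onto this loop directly: perturbing `reference_strain` with noise probes robustness, and thinning `times` probes how little reference data still allows recovery.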

Selection of data set with fuzzy entropy function (퍼지 엔트로피 함수를 이용한 데이터추출)

  • Lee, Sang-Hyuk;Cheon, Seong-Pyo;Kim, Sung-Shin
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.04a / pp.349-352 / 2004
  • In this paper, a data set is selected from the universe set using a fuzzy entropy function. Based on the definition of fuzzy entropy, we propose a fuzzy entropy function and prove that it satisfies the definition. The proposed function calculates the certainty or uncertainty of a data set, so we can choose the data sets that satisfy a given bound or reference; a reliable data set is thereby obtained. A simple example verifies that the proposed fuzzy entropy function selects a reliable data set.


Selection of data set with fuzzy entropy function

  • Lee, Sang-Hyuk;Cheon, Seong-Pyo;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.5 / pp.655-659 / 2004
  • In this paper, a data set is selected from the universe set using a fuzzy entropy function. Based on the definition of fuzzy entropy, the fuzzy entropy function is proposed and proved to satisfy the definition. The proposed function calculates the certainty or uncertainty of a data set, so we can choose the data sets that satisfy a given bound or reference; a reliable data set is thereby obtained. A simple example verifies that the proposed fuzzy entropy function selects a reliable data set.
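
The selection scheme in these two papers can be sketched as follows. The abstracts do not give the authors' entropy function, so the classical De Luca-Termini (Shannon-style) fuzzy entropy is used here as a stand-in, and `membership_fn` is an assumed mapping from a data point to its membership value.

```python
# Illustrative sketch: keep the data whose fuzzy entropy stays within a
# reference bound, i.e. the data we are sufficiently certain about.
import math

def fuzzy_entropy(membership):
    """De Luca-Termini fuzzy entropy of one membership value in [0, 1]."""
    if membership in (0.0, 1.0):
        return 0.0  # fully certain, zero entropy
    m = membership
    return -(m * math.log(m) + (1 - m) * math.log(1 - m))

def select_reliable(data, membership_fn, bound):
    """Keep data points whose fuzzy entropy is within the reference bound."""
    return [x for x in data if fuzzy_entropy(membership_fn(x)) <= bound]
```

Entropy peaks at membership 0.5 (maximum uncertainty) and vanishes at 0 or 1, so tightening the bound keeps only points whose membership is close to one of the two extremes.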

Extraction Method of Significant Clinical Tests Based on Data Discretization and Rough Set Approximation Techniques: Application to Differential Diagnosis of Cholecystitis and Cholelithiasis Diseases (데이터 이산화와 러프 근사화 기술에 기반한 중요 임상검사항목의 추출방법: 담낭 및 담석증 질환의 감별진단에의 응용)

  • Son, Chang-Sik;Kim, Min-Soo;Seo, Suk-Tae;Cho, Yun-Kyeong;Kim, Yoon-Nyun
    • Journal of Biomedical Engineering Research / v.32 no.2 / pp.134-143 / 2011
  • Selecting meaningful clinical tests and their reference values from high-dimensional clinical data with an imbalanced class distribution, where one class is represented by many examples and the other by only a few, is important for the differential diagnosis of similar diseases, but difficult. To this end, this study introduces methods based on the discernibility matrix and discernibility function of rough set theory (RST), combined with two discretization approaches: equal-width and equal-frequency discretization. The discretization approaches are used to define the reference values for clinical tests, and the discernibility matrix and function are used to extract a subset of significant clinical tests from the resulting nominal attribute values. To show its applicability to differential diagnosis, we applied the method to extract significant clinical tests and their reference values between a normal group (N = 351) and an abnormal group (N = 101) with either cholecystitis or cholelithiasis. We also investigated the selected clinical tests and the variation of their reference values, as well as the average predictive accuracy on four evaluation criteria, i.e., accuracy, sensitivity, specificity, and geometric mean, during 10-fold cross-validation. The experimental results confirmed that the rough set approximation methods using relative frequency give better results, in average geometric mean, than those using absolute frequency. This shows that the prediction model using relative frequency can be used effectively for classification and prediction on clinical data with imbalanced class distributions.
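
The two discretization schemes named above can be sketched directly; bin counts and data are illustrative, and the rough-set step that follows them in the paper is omitted here.

```python
# Sketch of equal-width and equal-frequency discretization, used to turn a
# numeric clinical test into nominal reference intervals.

def equal_width_bins(values, k):
    """Assign each value a bin index 0..k-1 over k equal-width intervals."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0  # guard against a constant attribute
    return [min(int((v - lo) / width), k - 1) for v in values]

def equal_frequency_bins(values, k):
    """Assign bin indices so each bin holds (roughly) the same count."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    per_bin = len(values) / k
    for rank, i in enumerate(order):
        bins[i] = min(int(rank / per_bin), k - 1)
    return bins
```

The bin boundaries produced this way play the role of reference values: each nominal bin corresponds to an interval of the original clinical test, and the discernibility matrix is then built over these nominal attributes.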

A study on the standardization strategy for building of learning data set for machine learning applications (기계학습 활용을 위한 학습 데이터세트 구축 표준화 방안에 관한 연구)

  • Choi, JungYul
    • Journal of Digital Convergence / v.16 no.10 / pp.205-212 / 2018
  • With the development of high-performance CPUs/GPUs, artificial intelligence algorithms such as deep neural networks, and large amounts of data, machine learning has been extended to various applications. In particular, the large volumes of data collected from the Internet of Things, social network services, web pages, and public data are accelerating the use of machine learning. Learning data sets for machine learning exist in various formats depending on application field and data type, which makes it difficult to process the data effectively and apply them to machine learning. Therefore, this paper studies how to build a learning data set for machine learning following standardized procedures. It first analyzes the requirements of learning data sets according to problem type and data type. Based on this analysis, it presents a reference model for building learning data sets for machine learning applications, together with target standardization organizations and a standard development strategy.
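
To make the idea of a standardized learning data set concrete, the sketch below shows a hypothetical descriptor with per-problem-type and per-data-type fields. The field names are invented for illustration and are not taken from the paper or from any existing standard.

```python
# Hypothetical example of a standardized learning-data-set descriptor and a
# minimal validity check. All field names are illustrative assumptions.
import json

dataset_descriptor = {
    "name": "example-image-classification",
    "problem_type": "classification",   # requirement varies by problem type
    "data_type": "image",               # requirement varies by data type
    "label_schema": {"classes": ["cat", "dog"]},
    "split": {"train": 0.8, "validation": 0.1, "test": 0.1},
    "license": "CC-BY-4.0",
}

def validate(desc):
    """Check that the minimal required fields of a descriptor are present."""
    required = {"name", "problem_type", "data_type", "label_schema", "split"}
    missing = required - desc.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True
```

A standardized, machine-readable descriptor like this is what lets tooling consume data sets from different sources uniformly, which is the motivation behind the reference model the paper proposes.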