
A Document-Driven Method for Certifying Scientific Computing Software for Use in Nuclear Safety Analysis

  • Smith, W. Spencer;Koothoor, Nirmitha
    • Nuclear Engineering and Technology
    • /
    • v.48 no.2
    • /
    • pp.404-418
    • /
    • 2016
  • This paper presents a documentation and development method to facilitate the certification of scientific computing software used in the safety analysis of nuclear facilities. To study the problems faced during quality assurance and certification activities, a case study was performed on legacy software used for thermal analysis of a fuel pin in a nuclear reactor. Although no errors were uncovered in the code, 27 issues of incompleteness and inconsistency were found in the documentation. This work proposes that software documentation follow a rational process, which includes a software requirements specification following a template that is reusable, maintainable, and understandable. To develop the design and implementation, this paper suggests literate programming as an alternative to traditional structured programming. Literate programming allows numerical algorithms and code to be documented together in what is termed the literate programmer's manual. This manual is developed with explicit traceability to the software requirements specification. The traceability between the theory, numerical algorithms, and implementation facilitates achieving completeness and consistency, and simplifies the process of verification and the associated certification.

Annotation Anchoring Methods in Structured Document Environments (구조문서 환경에서 Annotation의 앵커링 기법)

  • 손원성;김재경;최윤철;임순범
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2003.05b
    • /
    • pp.476-479
    • /
    • 2003
  • In electronic document environments, annotations inherently lose their reference to the anchor when the content of the source document changes. An annotation system therefore requires an anchoring capability that handles changes to the source document. However, existing work either does not consider changes to the anchor text or targets only plain-text documents. This paper proposes an annotation anchoring method for structured document environments such as XML. The proposed method performs a staged anchoring process over anchor text and path information in the XML environment. We also provide a user interface based on the proposed method. As a result, the proposed method and system guarantee more robust anchoring in structured document environments than existing work, and can be applied effectively to various fields such as IETM, cyber-class, e-learning, and the semantic web.
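The staged anchoring process described above can be sketched in a few lines: try the stored path first, then fall back to fuzzy matching of the anchor text. This is an illustrative reconstruction under stated assumptions, not the paper's actual algorithm; the function name, the two stages shown, and the similarity threshold are all simplifications.

```python
# Illustrative sketch of staged annotation re-anchoring in an XML document:
# Stage 1 trusts the stored path, Stage 2 falls back to fuzzy text matching.
import difflib
import xml.etree.ElementTree as ET

def reanchor(doc_xml: str, anchor_path: str, anchor_text: str):
    """Return (element, matched_text) for the surviving anchor, or None."""
    root = ET.fromstring(doc_xml)
    # Stage 1: the stored path still exists and still contains the anchor text.
    node = root.find(anchor_path)
    if node is not None and anchor_text in (node.text or ""):
        return node, anchor_text
    # Stage 2: the path broke; search all elements for the closest text match.
    best, best_ratio = None, 0.0
    for elem in root.iter():
        ratio = difflib.SequenceMatcher(None, anchor_text, elem.text or "").ratio()
        if ratio > best_ratio:
            best, best_ratio = elem, ratio
    return (best, best.text) if best_ratio > 0.5 else None

doc = "<doc><sec><p>annotation anchoring in XML</p></sec></doc>"
elem, text = reanchor(doc, "./sec/p", "annotation anchoring")
```

A real system would combine both signals (path distance and text similarity) rather than using them strictly in sequence.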


DEVELOPMENT OF LEGALITY SYSTEM FOR BUILDING ADMINISTRATION PERMISSION SERVICE BASED ON BIM

  • Inhan Kim;Jungsik Choi
    • International conference on construction engineering and project management
    • /
    • 2009.05a
    • /
    • pp.593-600
    • /
    • 2009
  • In Korea, the government has developed SEUMTER, an administration system for building-related public services, to facilitate and promote electronic submission and permission activities. SEUMTER currently provides a legality-checking system based on 2D drawings for the building administration permission service. However, this system suffers from several problems: the complexity of the structure and interrelation of Korean regulations, the inefficiency of legality checking based on 2D drawings, and the duplicated examination of documents (civil-affair application forms) and drawings. Therefore, the purpose of this study is to develop a BIM-based legality system for the building administration permission service in Korea. To achieve this purpose, the authors investigated the permission procedure and regulation structure used in the current building administration permission process, and suggest a permission procedure and regulation structure for a BIM-based legality system. In addition, the authors investigated element technologies (for example, methods of structuring regulations, a BIM model checker, and a viewer) for a BIM-based legality system. Finally, the authors suggest a strategy and future direction for applying the BIM-based legality system.


Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research is actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving, and recent interest in the technology and research on various algorithms have driven more technological progress than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. A further purpose of a modern knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data; such knowledge bases support intelligent processing in many fields of artificial intelligence, such as the question-answering system of a smart speaker. However, building a useful knowledge base is a time-consuming task and still requires considerable expert effort. Much recent research on knowledge-based artificial intelligence uses DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. Because the knowledge is generated from semi-structured infobox data created by users, DBpedia can expect high reliability in terms of accuracy. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying a document into ontology classes, classifying the sentences from which triples should be extracted, and selecting values and transforming them into RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process.
Through this process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort experts spend constructing instances according to the ontology schema.
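The training-data generation step mentioned above (adding BIO tags to sentences from a Wikipedia dump) can be sketched as follows. This is a minimal illustration, not the paper's code: the tokenization, the tag names, and the exact matching of an infobox value inside a sentence are simplifying assumptions.

```python
# Sketch of BIO-tag generation for sequence-labeling training data:
# given a sentence and a known infobox value, mark the value's tokens
# with B-/I- tags for the relation so a CRF or Bi-LSTM-CRF can learn it.
def bio_tag(tokens, value_tokens, relation):
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:   # exact span match (assumption)
            tags[i] = f"B-{relation}"
            for j in range(i + 1, i + n):
                tags[j] = f"I-{relation}"
            break
    return tags

sent = "Seoul is the capital of South Korea".split()
tags = bio_tag(sent, ["South", "Korea"], "country")
```

Pairs of `(token, tag)` produced this way form one training sequence for the relation-extraction model.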

Linear Path Query Processing using Backward Label Path on XML Documents (역방향 레이블 경로를 이용한 XML 문서의 선형 경로 질의 처리)

  • Park, Chung-Hee;Koo, Heung-Seo;Lee, Sang-Joon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.6
    • /
    • pp.766-772
    • /
    • 2007
  • As XML has become widely used, much research on XML storage and query processing has been done. However, previous work on path query processing has mainly focused on storage and retrieval methods for a single large XML document or for collections of XML documents sharing the same DTD, and did not efficiently process partial-match queries over sets of differently structured documents. To resolve this problem, we suggest a new index structure built on relational tables. The method constructs a $B^+$-tree index over backward label paths, instead of the forward label paths used in previous research for storing path information, and uses it to efficiently find the label paths that match a partial-match query during query processing.
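The benefit of backward label paths can be shown with a small sketch: storing each path reversed (leaf label first) turns a partial-match query such as `//chapter/title` into a prefix lookup. The in-memory dictionary below stands in for the paper's $B^+$-tree over relational tables; the data and function names are illustrative assumptions.

```python
# Sketch of backward label paths: reversing '/book/chapter/title' to
# 'title/chapter/book' lets a query like //chapter/title be answered by
# prefix matching, which a B+-tree range scan supports efficiently.
from collections import defaultdict

def backward_path(forward_path):
    """'/book/chapter/title' -> 'title/chapter/book'"""
    return "/".join(reversed(forward_path.strip("/").split("/")))

index = defaultdict(list)  # backward label path -> node ids
for node_id, path in [(1, "/book/chapter/title"),
                      (2, "/book/title"),
                      (3, "/article/chapter/title")]:
    index[backward_path(path)].append(node_id)

def partial_match(query):
    """Answer //chapter/title by prefix-matching the reversed query."""
    prefix = backward_path(query.lstrip("/"))
    return sorted(nid for bp, nids in index.items()
                  if bp == prefix or bp.startswith(prefix + "/")
                  for nid in nids)
```

With forward paths, the same query would require a suffix scan over every stored path, which ordinary ordered indexes cannot accelerate.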

Wrapper-based Economy Data Collection System Design And Implementation (래퍼 기반 경제 데이터 수집 시스템 설계 및 구현)

  • Piao, Zhegao;Gu, Yeong Hyeon;Yoo, Seong Joon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.05a
    • /
    • pp.227-230
    • /
    • 2015
  • For analyzing and predicting economic trends, it is necessary to collect particular economic news and stock data. A typical web crawler analyzes page content, collects documents, and extracts URLs automatically, while other forms of crawler collect only documents on a particular topic. In order to collect economic news from a particular website, we need to design a crawler that directly analyzes the site's structure and gathers data from it; that is, a wrapper-based web crawler is required. In this paper, we design a crawler wrapper for a big-data-based economic news analysis system and implement it to collect data. Using the wrapper-based crawler, we collected stock data and sales data from the US auto market since 2000, as well as US and South Korean economic news. The crawler determines the data-update frequency of each site and updates the data periodically. We remove duplicate data and noise such as advertising and public-relations articles, and build a structured data set for subsequent analysis.
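The wrapper idea above can be sketched as site-specific extraction rules plus duplicate removal. The rules, field names, and page markup below are invented for illustration; a production wrapper would be written against the real site's markup and would likely use an HTML parser rather than regular expressions.

```python
# Sketch of a wrapper-based collector: one hand-written rule set ("wrapper")
# per target site extracts structured fields, and duplicates are dropped by
# content hash before the record is stored.
import hashlib
import re

wrapper = {  # rules for one hypothetical news site's article markup
    "title": re.compile(r'<h1 class="headline">(.*?)</h1>', re.S),
    "date":  re.compile(r'<time datetime="(.*?)"', re.S),
    "body":  re.compile(r'<div class="article-body">(.*?)</div>', re.S),
}

seen = set()  # content hashes of already-stored records

def extract(html):
    record = {field: (m.group(1).strip() if (m := rx.search(html)) else None)
              for field, rx in wrapper.items()}
    digest = hashlib.sha256(repr(sorted(record.items())).encode()).hexdigest()
    if digest in seen:          # duplicate article fetched again: skip it
        return None
    seen.add(digest)
    return record

page = ('<h1 class="headline">Auto sales rise</h1>'
        '<time datetime="2015-05-01">May</time>'
        '<div class="article-body">US auto sales grew.</div>')
rec = extract(page)   # structured record on first fetch
dup = extract(page)   # second fetch of the same page is suppressed
```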


Requirements Engineering for Digitizing Traditional Medical Knowledge: The Case of Building Phytomedicine Mobile-Web Application in Tanzania

  • Beebwa, Irene Evarist;Dida, Mussa Ally;Chacha, Musa;Nyakundi, David Onchonga;Marwa, Janeth
    • International Journal of Knowledge Content Development & Technology
    • /
    • v.9 no.4
    • /
    • pp.95-114
    • /
    • 2019
  • The digitization of traditional medical knowledge in Tanzania will greatly enhance its preservation and dissemination. This is especially important given the challenges facing the current methods of preserving and managing such knowledge. This study presents the requirements engineering approaches and the requirements for a web-mobile application that would digitize indigenous knowledge of phytomedicine along with the relevant practitioner licensing and registration processes. To establish the requirements of such an application, the study sought the opinions of 224 stakeholders, whose suggestions were used to analyze and model the requirements for designing the web-mobile tool. The study was carried out in the Arusha, Kagera, and Dar es Salaam regions of Tanzania and involved ethnobotanical researchers, herb practitioners, curators from herbaria, and registrar officers from the Traditional and Alternative Health Practice Council. Structured interviews, surveys, observation, and document review were employed to elicit the basic functional and non-functional requirements for designing and implementing a web-mobile application to digitize indigenous knowledge of medicinal plants. The requirements were modelled using use case and context diagrams. Finally, the study produced a list of functional and non-functional requirements that can be used as guidelines to develop a web-mobile application that captures and documents traditional medical knowledge of medicinal plants in Tanzania, enabling relevant authorities to regulate and manage stakeholders.

A Study on Ontology-based Keywords Structuring for Efficient Information Retrieval (연구.학술정보 효율적 검색을 위한 온톨로지 기반의 주제 색인어 구조화 방안 연구)

  • Song, In-Seok
    • Journal of Information Management
    • /
    • v.39 no.4
    • /
    • pp.121-154
    • /
    • 2008
  • In this paper, an ontology-based keyword structuring method is proposed to represent the knowledge structure of scholarly documents and to make inferences from the semantic relationships holding among them. The characteristics of the thesaurus as a knowledge organization system (KOS) for subject headings are critically reviewed from the information retrieval point of view. The domain concepts are identified and classified by analyzing the information activities occurring in a general research process, based on a scholarly sensemaking model. The ontological structure of a keyword set is defined in terms of the semantic relationships among the canonical concepts that constitute scholarly documents such as journal articles. As a result, each ontologically structured keyword set represents the knowledge structure of the corresponding document as a semantic index. By means of the axioms and inference rules defined for information needs, users can analytically explore the scholarly communication network built on the semantic relationships among documents, based on the scholarly sensemaking model, in order to efficiently retrieve the relevant information for problem solving.
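The role of inference rules over structured keywords can be illustrated with a tiny sketch: keywords linked by a typed relation, plus one transitivity rule that lets retrieval expand a query along the structure. The relation name and the example graph are illustrative assumptions, not the paper's ontology.

```python
# Sketch of inference over structured keywords: a "broader" relation
# between concepts, with the transitivity rule
#   broader(a, b) & broader(b, c) => broader(a, c)
# computed as a reachability closure over the relation graph.
from collections import defaultdict

broader = defaultdict(set)  # keyword -> directly broader concepts
for narrow, broad in [("Bi-LSTM", "neural network"),
                      ("neural network", "machine learning"),
                      ("CRF", "machine learning")]:
    broader[narrow].add(broad)

def broader_closure(term):
    """All concepts broader than `term`, directly or by transitivity."""
    seen, stack = set(), [term]
    while stack:
        for b in broader[stack.pop()]:
            if b not in seen:
                seen.add(b)
                stack.append(b)
    return seen
```

A query for "machine learning" can then also retrieve documents indexed only with "Bi-LSTM", since "machine learning" lies in that keyword's closure.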

Design and Application of XTML Script Language based on XML (XML을 이용한 스크립트 언어 XTML 의 설계 및 응용)

  • Jeong, Byeong-Hui;Park, Jin-U;Lee, Su-Yeon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.5 no.6
    • /
    • pp.816-833
    • /
    • 1999
  • Output documents of existing word processors, which are organized around style information or presentation attributes, can be structured by converting them into XML (Extensible Markup Language) documents that reflect logical structures such as title, abstract, chapter, and paragraph. Structured this way, the documents can be interchanged and used effectively on the Internet. The conversion requires a complicated process called auto-tagging, in which the elements of output documents are inferred from style information, text sequences, and so on; this differs from various kinds of simple conversion. In this paper, we define XTML (XML Transformation Markup Language) as a DTD (Document Type Definition) capable of expressing transformation scripts that convert the flat, presentation-oriented structures of diverse documents into hierarchical logical XML structures under various DTD environments; we wrote transformation scripts as instances of this DTD and applied them to auto-tagging. XTML and its DTD are represented in XML syntax. In particular, to execute the transformation algorithms effectively, i.e., to handle existing XML documents efficiently, XTML provides functions and command interfaces that store a document as a tree structure named GROVE and process, store, and manipulate that GROVE.
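The core of auto-tagging, inferring logical elements from style information in a flat document, can be sketched briefly. The style names, the flat input format, and the single inference rule below are simplified assumptions for illustration; they are not XTML's actual script syntax.

```python
# Sketch of auto-tagging: turn a flat list of (style, text) runs, as a
# word processor might export them, into a hierarchical XML structure by
# inferring that each heading opens a new logical section.
import xml.etree.ElementTree as ET

flat = [  # presentation-oriented input: style name + text
    ("Heading1", "Introduction"),
    ("Body",     "XML documents have logical structure."),
    ("Heading1", "Method"),
    ("Body",     "We convert style runs into elements."),
]

root = ET.Element("doc")
section = None
for style, text in flat:
    if style == "Heading1":          # a heading starts a new section
        section = ET.SubElement(root, "section")
        ET.SubElement(section, "title").text = text
    else:                            # body text attaches to current section
        ET.SubElement(section, "para").text = text

xml_out = ET.tostring(root, encoding="unicode")
```

Real auto-tagging must also handle runs that precede any heading, nested heading levels, and ambiguous styles, which is why the paper treats it as an inference process rather than a simple mapping.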

A Peer-support Mini-counseling Model to Improve Treatment in HIV-positive Pregnant Women in Kupang City, East Nusa Tenggara, Indonesia

  • Artha Camellia;Plamularsih Swandari;Gusni Rahma;Tuti Parwati Merati;I Made Bakta;Dyah Pradnyaparamita Duarsa
    • Journal of Preventive Medicine and Public Health
    • /
    • v.56 no.3
    • /
    • pp.238-247
    • /
    • 2023
  • Objectives: Low adherence to antiretroviral (ARV) therapy in pregnant women with human immunodeficiency virus (HIV) increases the risk of virus transmission from mother to newborn. Increasing mothers' knowledge of and motivation to access treatment has been identified as a critical factor in prevention. Therefore, this research aimed to explore barriers and enablers in accessing HIV care and treatment services. Methods: This research was the first phase of a mixed-methods study conducted in Kupang, a remote city in East Nusa Tenggara Province, Indonesia. A purposive sample of 17 informants was interviewed, consisting of 6 mothers with HIV, 5 peer facilitators, and 6 health workers. Data were collected through semi-structured interviews, focus group discussions, observations, and document review, and analyzed by inductive thematic analysis: the data were grouped into themes, and relationships and linkages were drawn across the groups of informants. Results: Barriers to accessing care and treatment were lack of knowledge about the benefits of ARV therapy; stigma, both internalized and from the surrounding environment; difficulty in accessing services due to distance, time, and cost; administrative requirements; drug side effects; and the quality of health workers and HIV services. Conclusions: There is a need for a structured and integrated model of peer support to improve ARV uptake and treatment in pregnant women with HIV. This research identified the need for mini-counseling sessions designed to address psychosocial barriers, as an integrated approach to antenatal care that can effectively assist HIV-positive pregnant women in improving treatment adherence.