• Title/Summary/Keyword: language resources mapping


Automatic Mapping Between Large-Scale Heterogeneous Language Resources for NLP Applications: A Case of Sejong Semantic Classes and KorLexNoun for Korean

  • Park, Heum;Yoon, Ae-Sun
    • Language and Information
    • /
    • v.15 no.2
    • /
    • pp.23-45
    • /
    • 2011
  • This paper proposes a statistics-based linguistic methodology for automatic mapping between large-scale heterogeneous language resources for NLP applications in general. As a particular case, it treats automatic mapping between two large-scale heterogeneous Korean language resources: Sejong Semantic Classes (SJSC) in the Sejong Electronic Dictionary (SJD) and nouns in KorLex. KorLex is a large-scale Korean WordNet, but it lacks syntactic information. SJD contains refined semantic-syntactic information, with semantic labels depending on SJSC, but its list of entry words is much smaller than that of KorLex. The goal of our study is to build a rich language resource by integrating the useful information within SJD into KorLex. In this paper, we use both linguistic and statistical methods to construct an automatic mapping methodology. The linguistic aspect of the methodology focuses on three linguistic clues: monosemy/polysemy of word forms, instances (example words), and semantically related words. The statistical aspect uses three statistical formulae, ${\chi}^2$, Mutual Information, and Information Gain, to obtain candidate synsets. Compared with the performance of manual mapping, the automatic mapping based on our proposed statistical linguistic methods shows good correctness rates: recall 0.838, precision 0.718, and F1 0.774.

  • PDF
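The three association scores named in the abstract above can be sketched in plain Python. The 2x2 contingency counts and the candidate synset names below are invented for illustration and are not taken from the paper's data.

```python
import math

def chi_square(n11, n10, n01, n00):
    """Pearson's chi-square for a 2x2 contingency table of
    (word present/absent) x (semantic class present/absent)."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return num / den if den else 0.0

def mutual_information(n11, n10, n01, n00):
    """Pointwise mutual information of the (word, class) pair."""
    n = n11 + n10 + n01 + n00
    p_xy = n11 / n
    p_x = (n11 + n10) / n
    p_y = (n11 + n01) / n
    return math.log2(p_xy / (p_x * p_y)) if p_xy else 0.0

def information_gain(n11, n10, n01, n00):
    """Reduction in class entropy after observing the word."""
    n = n11 + n10 + n01 + n00
    def h(p):
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p) if 0 < p < 1 else 0.0
    p_class = (n11 + n01) / n
    p_word = (n11 + n10) / n
    h_with = h(n11 / (n11 + n10)) if (n11 + n10) else 0.0
    h_without = h(n01 / (n01 + n00)) if (n01 + n00) else 0.0
    return h(p_class) - (p_word * h_with + (1 - p_word) * h_without)

# Rank candidate KorLex synsets for one SJSC class by chi-square score;
# the counts are (n11, n10, n01, n00) over a hypothetical corpus.
counts = {"synset_a": (30, 10, 5, 55), "synset_b": (8, 32, 27, 33)}
scores = {s: chi_square(*c) for s, c in counts.items()}
best = max(scores, key=scores.get)   # -> "synset_a"
```

In practice all three scores would be combined or compared, as the paper does, rather than relying on a single measure.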

Ontology Mapping and Rule-Based Inference for Learning Resource Integration

  • Jetinai, Kotchakorn;Arch-int, Ngamnij;Arch-int, Somjit
    • Journal of information and communication convergence engineering
    • /
    • v.14 no.2
    • /
    • pp.97-105
    • /
    • 2016
  • With the increasing demand for interoperability among existing learning resource systems in order to enable the sharing of learning resources, such resources need to be annotated with ontologies that use different metadata standards. These different ontologies must be reconciled through ontology mediation, so as to cope with information heterogeneity problems, such as semantic and structural conflicts. In this paper, we propose an ontology-mapping technique using Semantic Web Rule Language (SWRL) to generate semantic mapping rules that integrate learning resources from different systems and that cope with semantic and structural conflicts. Reasoning rules are defined to support a semantic search for heterogeneous learning resources, which are deduced by rule-based inference. Experimental results demonstrate that the proposed approach enables the integration of learning resources originating from multiple sources and helps users to search across heterogeneous learning resource systems.
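A minimal sketch of the rule-based idea described above: a mapping rule relating equivalent properties from two metadata schemas is applied by forward chaining, so a search over one vocabulary also reaches resources annotated with the other. The prefixes, property names, and rules are placeholders, not the paper's actual SWRL rules.

```python
# Toy forward-chaining inference over metadata triples, illustrating how
# mapping rules can reconcile two hypothetical learning-resource schemas.
facts = {
    ("res1", "lom:author", "Kim"),
    ("res2", "dc:creator", "Lee"),
}

# Mapping rules: property in schema A -> equivalent property in schema B
property_map = {"lom:author": "dc:creator", "lom:subject": "dc:subject"}

def infer(facts, property_map):
    """Apply each mapping rule until no new triple is produced."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (s, p, o) in list(derived):
            if p in property_map and (s, property_map[p], o) not in derived:
                derived.add((s, property_map[p], o))
                changed = True
    return derived

all_facts = infer(facts, property_map)

# A semantic search over "dc:creator" now also finds the resource
# annotated with "lom:author".
creators = {s for (s, p, o) in all_facts if p == "dc:creator"}
```

A real SWRL engine would express the same idea declaratively and also handle structural conflicts, which this sketch omits.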

Mapping Heterogenous Ontologies for the HLP Applications - Sejong Semantic Classes and KorLexNoun 1.5 - (인간언어공학에의 활용을 위한 이종 개념체계 간 사상 - 세종의미부류와 KorLexNoun 1.5 -)

  • Bae, Sun-Mee;Im, Kyoung-Up;Yoon, Ae-Sun
    • Korean Journal of Cognitive Science
    • /
    • v.21 no.1
    • /
    • pp.95-126
    • /
    • 2010
  • This study proposes a bottom-up and inductive manual mapping methodology for integrating two heterogeneous fine-grained ontologies that were built by a top-down and deductive methodology, namely the Sejong semantic classes (SJSC) and the upper nodes in KorLexNoun 1.5 (KLN), for HLP applications. It also discusses the various problems in the mapping process caused by the heterogeneity of the two language resources and proposes solutions. The mapping methodology uses terminal nodes of SJSC and Least Upper Bounds (LUB) of KLN as basic mapping units. The mapping procedure is as follows. First, the candidate mapping groups are decided by the lexical correlation between the synsets of KLN and the noun senses of the Sejong Noun Dictionary (SJND), which are classified according to SJSC. Second, the meanings of the candidate groups are precisely disambiguated using the linguistic information provided by the two ontologies, i.e. the hierarchical structures, the definitions, and the examples. Third, the level of the LUB is determined by applying the appropriate predicates and definitions of SJSC to the upper-lower and sister nodes of the candidate LUB. Fourth, the possibility of mapping to the terminal node of SJSC is judged by comparing the hierarchical relations of the two ontologies. Finally, the incorrect synsets of KLN and the terminological candidate groups are excluded from the mapping. This study actively uses the various language information described in each ontology to establish the mapping criteria, which is indeed the advantage of fine-grained manual mapping. The result of applying the proposed methodology shows that 6,487 LUBs are mapped to 474 terminal and non-terminal nodes of SJSC, excluding multiply mapped nodes, and that 88,255 nodes of KLN are mapped, including all lower-level nodes of the mapped LUBs. The total mapping coverage is 97.91% of KLN synsets. This result can be applied in many elaborate syntactic and semantic analyses for Korean language processing.

  • PDF
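The Least Upper Bound (LUB) mapping unit mentioned in the abstract above can be illustrated with a toy taxonomy: the LUB of a group of nodes is the deepest node that dominates all of them. The hierarchy and node names below are invented and bear no relation to the actual KLN noun hierarchy.

```python
# Toy child -> parent map standing in for a noun taxonomy.
parent = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal", "mammal": "animal",
    "animal": "entity",
}

def ancestors(node):
    """Path from a node up to the root, the node itself included."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def least_upper_bound(nodes):
    """Deepest node that dominates every node in the group."""
    common = set(ancestors(nodes[0]))
    for n in nodes[1:]:
        common &= set(ancestors(n))
    # ancestors() lists nodes bottom-up, so the first common one is deepest
    for a in ancestors(nodes[0]):
        if a in common:
            return a

lub = least_upper_bound(["dog", "wolf"])   # -> "canine"
```

Using the LUB rather than individual synsets as the KLN-side unit lets one mapping decision cover all lower-level nodes at once, which is how the abstract's 6,487 mapped LUBs come to cover 88,255 KLN nodes.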

Information Strategy Planning for Digital Infrastructure Building with Geo-based Nonrenewable Resources Information in Korea: Conceptual Modeling Units

  • Chi, Kwang-Hoon;Yeon, Young-Kwang;Park, No-Wook;Lee, Ki-Won
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.191-196
    • /
    • 2002
  • Starting this year, KIGAM, one of the Korean government-supported research institutes, has launched a new national program for building a digital geologic/natural-resources infrastructure. The goal of this program is to prepare a digitally oriented infrastructure for the practical building, management, and public service of digital databases covering the numerous types of paper maps related to geo-scientific resources, i.e. geologic thematic map sets: hydro-geologic maps, applied geologic maps, geo-chemical maps, airborne radiometric/magnetic maps, coal geologic maps, off-shelf bathymetry maps, and so forth. This digital infrastructure comprises several research issues: ISP (Information Strategy Planning), geo-framework modeling of each map set, pilot database building, a cyber geo-mineral directory service system, and an upgrade of the web-based geologic information retrieval system that serves Korean digital geologic maps at 1:50K scale. Among these, this study mainly discusses UML (Unified Modeling Language)-based data modeling of the geo-data sets held by and produced in KIGAM, and presents its results from the viewpoint of digital geo-modeling ISP. It is expected that this model will be developed further to serve as a guide or framework model for geologic thematic mapping and practical database building, as well as for other types of national thematic map databases.

  • PDF

OWL Authoring System for building Web Ontology (웹 온톨로지 구축을 위한 OWL 저작 시스템)

  • Lee Moohun;Cho Hyunkyu;Cho Hyeonsung;Cho Sunghoon;Jang Changbok;Choi Euiin
    • The Journal of Society for e-Business Studies
    • /
    • v.10 no.3
    • /
    • pp.21-36
    • /
    • 2005
  • Current web search returns many results containing information the user does not want, because it retrieves information by keyword matching. An ontology can describe the correct meaning of a web resource and the relationships between web resources, and with it we can extract the information a user actually wants; accordingly, we need an ontology to represent knowledge. The W3C announced OWL (Web Ontology Language), a technology for describing the meaning of such web resources. However, the development of dedicated tools that can effectively compose and edit OWL has been inactive. In this paper, we design and develop an OWL authoring system that effectively supports the creation and editing of OWL.

  • PDF

A Novel Framework for Defining and Submitting Workflows to Service-Oriented Systems

  • Bendoukha, Hayat;Slimani, Yahya;Benyettou, Abdelkader
    • Journal of Information Processing Systems
    • /
    • v.10 no.3
    • /
    • pp.365-383
    • /
    • 2014
  • Service-oriented computing offers efficient solutions for executing complex applications in an acceptable amount of time. These solutions provide important computing and storage resources, but they are too difficult for individual users to handle. In fact, Service-oriented architectures are usually sophisticated in terms of design, specifications, and deployment. On the other hand, workflow management systems provide frameworks that help users to manage cooperative and interdependent processes in a convivial manner. In this paper, we propose a workflow-based approach to fully take advantage of new service-oriented architectures that take the users' skills and the internal complexity of their applications into account. To get to this point, we defined a novel framework named JASMIN, which is responsible for managing service-oriented workflows on distributed systems. JASMIN has two main components: unified modeling language (UML) to specify workflow models and business process execution language (BPEL) to generate and compose Web services. In order to cover both workflow and service concepts, we describe in this paper a refinement of UML activity diagrams and present a set of rules for mapping UML activity diagrams into BPEL specifications.
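The idea of mapping UML activity diagrams to BPEL, as described above, can be sketched with a rule table; the four mappings and node names below are hypothetical illustrations, and the actual JASMIN rule set is considerably richer.

```python
import xml.etree.ElementTree as ET

# Hypothetical subset of UML-activity-to-BPEL mapping rules.
UML_TO_BPEL = {
    "action": "invoke",     # an action node calls a service
    "decision": "if",       # a decision node branches
    "fork": "flow",         # a fork runs branches concurrently
    "final": "exit",        # an activity-final node ends the process
}

def to_bpel(activity_nodes):
    """Translate a linear list of (kind, name) UML activity nodes into
    a skeletal BPEL <process> with one element per mapped node."""
    ns = "http://docs.oasis-open.org/wsbpel/2.0/process/executable"
    process = ET.Element("process", xmlns=ns)
    seq = ET.SubElement(process, "sequence")
    for kind, name in activity_nodes:
        ET.SubElement(seq, UML_TO_BPEL[kind], name=name)
    return ET.tostring(process, encoding="unicode")

bpel = to_bpel([("action", "ReserveCompute"), ("decision", "CheckQuota"),
                ("action", "SubmitJob"), ("final", "Done")])
```

A real translation would also map control-flow edges, guards, and partner links, which this skeleton leaves out.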

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has been accelerating with the Fourth Industrial Revolution, and research related to AI has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Thanks to recent interest in the technology and research on various algorithms, the field of artificial intelligence has advanced more than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. Increasingly, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases support intelligent processing in various fields of artificial intelligence, such as the question-answering system of a smart speaker. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of its knowledge, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model based on the DBpedia ontology schema, learned from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into an RDF triple structure. The structures of Wikipedia infoboxes are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we ran comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process.
Through this proposed process, it is possible to obtain structured knowledge by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort experts must spend constructing instances according to the ontology schema.
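The final step of the pipeline described above, turning a BIO-tagged sentence into an RDF-style triple, might look like the following sketch; the tag names and the dbo: property are placeholders rather than the paper's actual label set.

```python
# Illustrative value extraction from a BIO-tagged token sequence,
# followed by conversion into a (subject, predicate, object) triple.
def extract_value(tokens, tags):
    """Collect the token span labeled B-VAL / I-VAL."""
    span = []
    for tok, tag in zip(tokens, tags):
        if tag == "B-VAL":
            span = [tok]
        elif tag == "I-VAL" and span:
            span.append(tok)
    return " ".join(span)

def to_triple(subject, predicate, tokens, tags):
    """Emit a triple only when the tagger found a value span."""
    value = extract_value(tokens, tags)
    return (subject, predicate, value) if value else None

tokens = ["Yi", "Sun-sin", "was", "born", "in", "Hanseong", "."]
tags   = ["O", "O", "O", "O", "O", "B-VAL", "O"]
triple = to_triple("Yi_Sun-sin", "dbo:birthPlace", tokens, tags)
# triple == ("Yi_Sun-sin", "dbo:birthPlace", "Hanseong")
```

In the paper's setting, the tags would come from the trained CRF or Bi-LSTM-CRF model rather than being given by hand as here.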

A Research in Applying Big Data and Artificial Intelligence on Defense Metadata using Multi Repository Meta-Data Management (MRMM) (국방 빅데이터/인공지능 활성화를 위한 다중메타데이터 저장소 관리시스템(MRMM) 기술 연구)

  • Shin, Philip Wootaek;Lee, Jinhee;Kim, Jeongwoo;Shin, Dongsun;Lee, Youngsang;Hwang, Seung Ho
    • Journal of Internet Computing and Services
    • /
    • v.21 no.1
    • /
    • pp.169-178
    • /
    • 2020
  • Reductions in troops and human resources, together with the need to improve combat power, have led the Korean Department of Defense to actively adopt 4th Industrial Revolution technology (Artificial Intelligence, Big Data). The defense information system has been developed in various ways according to the tasks and particular characteristics of each military branch. In order to take full advantage of 4th Industrial Revolution technology, the closed defense data management system must be improved. However, establishing and using data standards across all information systems for defense big data and artificial intelligence is limited by security issues, the business characteristics of each branch, and the difficulty of standardizing large-scale systems. Based on the interworking requirements of each system, data sharing is restricted to direct linkage through interoperability agreements between systems. To implement smart defense using 4th Industrial Revolution technology, it is urgent to prepare a system that can share defense data and make good use of it. To support this technically, it is critical to develop Multi Repository Meta-Data Management (MRMM), which supports systematic standard management of defense data by managing enterprise standards and per-system standard mappings, and which promotes data interoperability through linkage between standards in accordance with the Defense Interoperability Management Development Guidelines. We introduce MRMM and implement it using vocabulary similarity based on machine learning and a statistical approach. Based on MRMM, we expect to simplify the standardization and integration of all military databases using artificial intelligence and big data. This will lead to a large reduction in the defense budget while increasing combat power for implementing smart defense.
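The vocabulary-similarity idea mentioned above can be illustrated with a simple statistical measure such as character-bigram Dice similarity for matching attribute names across two metadata standards. The field names, threshold, and choice of measure here are illustrative assumptions, not the MRMM implementation.

```python
# Match a system-local field name against enterprise-standard terms
# using character-bigram Dice similarity.
def bigrams(s):
    s = s.lower().replace("_", "")
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a, b):
    """Dice coefficient over character bigrams, in [0, 1]."""
    ga, gb = bigrams(a), bigrams(b)
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def best_match(field, candidates, threshold=0.4):
    """Propose the standard term closest to a local field name,
    or None when nothing clears the threshold."""
    closest = max(candidates, key=lambda c: dice(field, c))
    return closest if dice(field, closest) >= threshold else None

standard_terms = ["unit_name", "unit_code", "equipment_type"]
match = best_match("unitname", standard_terms)   # -> "unit_name"
```

A production system would combine such string measures with learned embeddings and human review before committing a standard mapping.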