• Title/Summary/Keyword: semantic mapping


Generating Ontology Classes and Hierarchical Relationships from Relational Database View Definitions (관계형 데이터베이스 뷰 정의로부터 온톨로지 클래스와 계층 관계 생성 기법)

  • Yang, Jun-Seok;Kim, Ki-Sung;Kim, Hyoung-Joo
    • Journal of KIISE:Databases / v.37 no.6 / pp.333-342 / 2010
  • Building an ontology is a key step in constructing the semantic web, but it is a time-consuming process. Hence, several approaches automatically generate ontologies from relational databases. Current studies on such automatic generation focus on analyzing the database schema and the stored data: they generate the ontology from the tables and constraints in the schema alone and ignore view definitions. However, view definitions are written by a database designer with the domain of the database in mind, so by considering them, additional classes and hierarchical relationships can be generated, which are useful for answering queries and for integrating ontologies. In this paper, we formalize the generation of classes and hierarchical relationships by analyzing existing methods, and we propose a method that generates additional classes and hierarchical relationships by analyzing view definitions. Finally, we analyze the ontology produced by applying our method to synthetic and real-world data, and show that the method generates meaningful classes and hierarchical relationships from view definitions.
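
To make the idea concrete, here is a minimal Python sketch (not the authors' algorithm) of the core intuition: a view defined as a filtered SELECT over a single table denotes a subset of that table's rows, so it can be mapped to an OWL subclass of the table's class. The namespace and the regex-level SQL handling are illustrative assumptions.

```python
# Illustrative sketch: map a simple filtered view to an OWL subclass.
import re
from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/onto#")  # hypothetical namespace

def class_from_view(view_sql: str, graph: Graph):
    """Very simplified: CREATE VIEW v AS SELECT * FROM t WHERE ...
    yields ex:V rdfs:subClassOf ex:T."""
    m = re.match(r"CREATE\s+VIEW\s+(\w+)\s+AS\s+SELECT\s+\*\s+FROM\s+(\w+)",
                 view_sql, re.IGNORECASE)
    if not m:
        return  # view shapes beyond this pattern are out of scope here
    view, table = m.group(1).capitalize(), m.group(2).capitalize()
    graph.add((EX[view], RDF.type, OWL.Class))
    graph.add((EX[table], RDF.type, OWL.Class))
    graph.add((EX[view], RDFS.subClassOf, EX[table]))  # view rows are a subset

g = Graph()
g.bind("ex", EX)
class_from_view(
    "CREATE VIEW senior_employee AS SELECT * FROM employee WHERE age > 50", g)
print(g.serialize(format="turtle"))
```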

A Study of Dynamic Web Ontology for Comparison-shopping Agent based on Semantic Web (시멘틱 웹 기반의 비교구매 에이전트를 위한 동적 웹 온톨로지에 대한 연구)

  • Kim, Su-Kyoung;Ahn, Ki-Hong
    • Journal of Intelligence and Information Systems / v.11 no.2 / pp.31-45 / 2005
  • In this paper, commodity information for a digital camcorder is acquired from the HTML pages of electronic commerce stores, each of which presents it differently, using wrapper technology; a metadata schema for the digital camcorder is designed, and the pages are converted into RDF triples and RDF documents through RDF document converters. Based on the designed metadata schema, the data are converted into an OWL web ontology and saved in a digital camcorder domain ontology storage (DCCKBO, the DCC knowledge base ontology) implemented on a relational database. Through comparison, mapping, and inference between the RDF data and the DCCKBO, buyers are provided with the DCC information of the store offering the best purchase terms. We thus propose a dynamic web ontology that infers the best commodity purchasing information and defines the domain ontology saved in the DCCKBO.
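
A hedged sketch of the wrapper-output-to-RDF step described above, using rdflib; the schema terms and product fields below are hypothetical stand-ins for the paper's digital camcorder schema, not its actual vocabulary.

```python
# Sketch: emit wrapper-scraped product fields as RDF triples.
from rdflib import Graph, Literal, Namespace, RDF

DCC = Namespace("http://example.org/dcc#")  # stand-in for the DCC schema

scraped = {"model": "XC-100", "price": 899000, "store": "StoreA"}  # wrapper output

g = Graph()
g.bind("dcc", DCC)
item = DCC[scraped["model"]]
g.add((item, RDF.type, DCC.Camcorder))
g.add((item, DCC.price, Literal(scraped["price"])))
g.add((item, DCC.soldBy, Literal(scraped["store"])))
print(g.serialize(format="turtle"))
```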


Small Sample Face Recognition Algorithm Based on Novel Siamese Network

  • Zhang, Jianming;Jin, Xiaokang;Liu, Yukai;Sangaiah, Arun Kumar;Wang, Jin
    • Journal of Information Processing Systems / v.14 no.6 / pp.1464-1479 / 2018
  • In face recognition, the number of available training samples for a single category is sometimes insufficient, so the performance of models trained with convolutional neural networks is not ideal. This paper proposes a small-sample face recognition algorithm based on a novel Siamese network, which does not need abundant samples for training. The algorithm designs and realizes a new Siamese network model, SiameseFace1, which takes pairs of face images as inputs and maps them to a target space so that the $L_2$ norm distance in the target space represents the semantic distance in the input space. The mapping is represented by a neural network trained with supervised learning. Moreover, a more lightweight Siamese network model, SiameseFace2, is designed to reduce the network parameters without losing accuracy. We also present a new method to generate training data and expand the number of training samples per category in the AR and Labeled Faces in the Wild (LFW) datasets, which improves the recognition accuracy of the models. Four loss functions are adopted in experiments on the AR and LFW datasets. The results show that the contrastive loss function combined with the new Siamese network model can effectively improve the accuracy of face recognition.
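
The contrastive loss named in the abstract is a standard formulation; a minimal PyTorch version is sketched below (the SiameseFace1/SiameseFace2 architectures themselves are not reproduced here, and the margin value is an assumption).

```python
# Standard contrastive loss for a Siamese network: matching pairs are pulled
# together in the target space, non-matching pairs pushed beyond a margin.
import torch
import torch.nn.functional as F

def contrastive_loss(emb1, emb2, same, margin: float = 1.0):
    """same: 1 for pairs of the same identity, 0 otherwise."""
    d = F.pairwise_distance(emb1, emb2)            # L2 distance in target space
    pos = same * d.pow(2)                          # pull matching pairs together
    neg = (1 - same) * torch.clamp(margin - d, min=0).pow(2)  # push others apart
    return (pos + neg).mean()
```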

AANet: Adjacency auxiliary network for salient object detection

  • Li, Xialu;Cui, Ziguan;Gan, Zongliang;Tang, Guijin;Liu, Feng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.10 / pp.3729-3749 / 2021
  • Deep convolutional network-based salient object detection (SOD) has achieved impressive performance, but it remains challenging to make full use of the multi-scale information in the extracted features and to choose an appropriate fusion method for processing the feature maps. In this paper, we propose a new adjacency auxiliary network (AANet) based on multi-scale feature fusion for SOD. First, we design a parallel connection feature enhancement module (PFEM) for each layer of feature extraction, which improves feature density by connecting different dilated convolution branches in parallel and adds a channel attention flow to fully extract the contextual information of the features. Then, adjacent-layer features with similar degrees of abstraction but different characteristics are fused through the adjacency auxiliary module (AAM) to eliminate feature ambiguity and noise. In addition, to refine the features effectively and obtain more accurate object boundaries, we design an adjacency decoder (AAM_D) based on the AAM, which concatenates the features of adjacent layers, extracts their spatial attention, and then combines them with the output of the AAM. The AAM_D outputs, which carry both semantic information and spatial detail, are used as salient prediction maps for multi-level joint supervision. Experimental results on six benchmark SOD datasets demonstrate that the proposed method outperforms similar previous methods.
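
As a rough illustration of the PFEM idea, here is a PyTorch sketch of parallel dilated-convolution branches followed by channel attention; the branch count, dilation rates, and the squeeze-and-excitation style attention are assumptions, not the paper's specification.

```python
# Sketch: parallel dilated conv branches fused, then channel attention.
import torch
import torch.nn as nn

class ParallelDilatedBlock(nn.Module):
    def __init__(self, ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in dilations)
        self.fuse = nn.Conv2d(ch * len(dilations), ch, 1)
        # squeeze-and-excitation style channel attention (an assumption here)
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // 4, 1), nn.ReLU(),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        return y * self.att(y)  # reweight channels by attention

x = torch.randn(1, 64, 32, 32)
print(ParallelDilatedBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```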

Generating Radiology Reports via Multi-feature Optimization Transformer

  • Rui Wang;Rong Hua
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.10 / pp.2768-2787 / 2023
  • As an important application of computer science in the medical field, automatic generation of radiology reports has attracted wide attention in the academic community. Because the proportion of normal regions in radiology images is much larger than that of abnormal regions, words describing diseases are often masked by other words, resulting in significant feature loss during computation and degrading the quality of the generated reports. In addition, the large gap between visual features and semantic features causes traditional multi-modal fusion methods to fail to generate the long narrative structures, consisting of multiple sentences, that medical reports require. To address these challenges, we propose a multi-feature optimization Transformer (MFOT) for generating radiology reports. In detail, a multi-dimensional mapping attention (MDMA) module is designed to encode the visual grid features from different dimensions, reducing the loss of primary features during encoding; a feature pre-fusion (FP) module is constructed to enhance the interaction between multi-modal features and thus generate a reasonably structured radiology report; and a detail-enhanced attention (DEA) module is proposed to improve the extraction and utilization of key features and reduce their loss. We evaluate the proposed model against prevailing mainstream models on the widely recognized radiology report datasets IU X-Ray and MIMIC-CXR. The experimental outcomes demonstrate that our model achieves SOTA performance on both datasets; compared with the base model, the average improvement across six key indicators is 19.9% and 18.0%, respectively. These findings substantiate the efficacy of our model for automated radiology report generation.
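
The MDMA, FP, and DEA modules are not specified in enough detail in this abstract to reproduce, so the sketch below shows only the common Transformer backbone such models extend: report tokens cross-attending to visual grid features. All shapes and dimensions are illustrative.

```python
# Generic report-generation backbone: text tokens attend to image grid features.
import torch
import torch.nn as nn

d_model = 512
visual = torch.randn(1, 49, d_model)   # e.g. a 7x7 grid of CNN encoder features
tokens = torch.randn(1, 20, d_model)   # embedded report tokens decoded so far

cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
out, weights = cross_attn(query=tokens, key=visual, value=visual)
print(out.shape)  # torch.Size([1, 20, 512]) - each token mixes image regions
```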

Analysis of Metadata Standards of Record Management for Metadata Interoperability: From the Viewpoint of the Task Model and 5W1H (메타데이터 상호운용성을 위한 기록관리 메타데이터 표준 분석 5W1H와 태스크 모델의 관점에서)

  • Baek, Jae-Eun;Sugimoto, Shigeo
    • The Korean Journal of Archival Studies / no.32 / pp.127-176 / 2012
  • Metadata is well recognized as one of the foundational factors in archiving and long-term preservation of digital resources. There are several metadata standards for records management, archives, and preservation, e.g. ISAD(G), EAD, AGRkMS, PREMIS, and OAIS. Careful consideration is needed when selecting metadata standards in order to design a schema that meets the requirements of a particular archival system, and interoperability with other systems should also be considered in schema design. In our previous research, we presented a feature analysis of metadata standards by identifying the primary resource lifecycle stages where each standard is applied, and clarified that no single metadata standard can cover the whole records lifecycle for archiving and preservation. Through that analysis we examined the features of metadata across the records lifecycle and clarified the relationships between the metadata standards and the lifecycle stages, leaving more detailed analysis for future study. This paper analyzes the metadata schemas from the viewpoint of the tasks performed in the lifecycle. Metadata schemas are primarily defined to describe the properties of a resource in accordance with the purposes of description, e.g. finding aids, records management, preservation, and so forth. In other words, the metadata standards are resource- and purpose-centric, and the resource lifecycle is not explicitly reflected in them; there are no systematic methods for mapping between different metadata standards in accordance with the lifecycle. This paper proposes a method for mapping between metadata standards based on the tasks contained in the resource lifecycle. We first propose a Task Model to clarify the tasks applied to resources at each stage of the lifecycle. This task-centric model is used to identify features of metadata standards and to create mappings among the elements of those standards. Categorizing the elements is important in order to limit the semantic scope of mapping and decrease the number of element combinations to consider. This paper uses the 5W1H (Who, What, Why, When, Where, How) model to categorize the elements. 5W1H categories are generally used for describing events, e.g. in news articles; since performing a task on a resource causes an event in which metadata elements are used, we consider the 5W1H categories adequate for categorizing the elements. Using these categories, we determine the features of every element of the metadata standards AGLS, AGRkMS, PREMIS, EAD, and OAIS, plus an attribute set extracted from the DPC decision flow. We then perform element mapping between the standards and identify the relationships between them. In this study, we defined, for each 5W1H category, a set of terms that typically appear in element definitions and used those terms to categorize the elements. For example, if the definition of an element includes terms such as person and organization, which denote an agent that contributes to creating or modifying a resource, the element is categorized into the Who category. A single element can fall into one or more 5W1H categories. We thus categorized every element of the metadata standards using the 5W1H model and then carried out mapping among the elements in each category. We conclude that the Task Model provides a new viewpoint on metadata schemas and helps us understand the features of metadata standards for records management and archives. The 5W1H model, defined on the basis of the Task Model, provides a core set of categories for semantically classifying metadata elements from the viewpoint of the event caused by a task.
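
The categorization step lends itself to a small sketch: scan each element definition for cue terms and assign one or more 5W1H categories. The cue lists below are illustrative placeholders, not the authors' actual term sets.

```python
# Sketch: term-based 5W1H categorization of metadata element definitions.
CUES = {
    "Who":   {"person", "organization", "agent", "creator"},
    "What":  {"title", "content", "format", "identifier"},
    "Why":   {"purpose", "function", "reason"},
    "When":  {"date", "time", "period"},
    "Where": {"location", "place", "address"},
    "How":   {"method", "procedure", "process"},
}

def categorize(definition: str) -> set:
    """Return every 5W1H category whose cue terms appear in the definition;
    an element may legitimately fall into more than one category."""
    words = set(definition.lower().split())
    return {cat for cat, cues in CUES.items() if cues & words}

print(categorize("The person or organization responsible for the record"))
# {'Who'}
```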

Ontology-based Course Mentoring System (온톨로지 기반의 수강지도 시스템)

  • Oh, Kyeong-Jin;Yoon, Ui-Nyoung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.149-162 / 2014
  • Course guidance is a mentoring process performed before students register for upcoming classes. It plays a very important role in checking students' degree audits and in mentoring the classes to be taken in the coming semester, and it is intimately involved with graduation assessment and completion of ABEEK certification. Currently, course guidance is performed manually by advisers at most universities in Korea because they have no electronic systems for it. Lacking such systems, advisers must analyze each student's degree audit together with the curriculum information of their own departments, a process that often causes human error due to its complexity. An electronic system is thus essential to avoid human error in course guidance. A relational data model-based system could address the problems of the manual approach, but such systems have limitations: curricula and certification requirements change with new university policies or surrounding circumstances, and when they change, the schema of the existing system must be changed accordingly. It is also difficult to provide semantic search, because semantic relationships between subjects are hard to extract. In this paper, we model a course mentoring ontology based on an analysis of the computer science curriculum, the structure of the degree audit, and ABEEK certification. We also propose an ontology-based course guidance system to overcome the limitations of existing methods and to make the course mentoring process effective for both advisers and students. In the proposed system, all data consist of ontology instances. To create them, an ontology population module is developed using the JENA framework for building semantic web and linked data applications. In this module, mapping rules connect parts of the degree audit to corresponding parts of the course mentoring ontology. All ontology instances are generated from the degree audits of students participating in the course mentoring test. The generated instances are saved to JENA TDB as a triple repository after an inference step using the JENA inference engine. A user interface for course guidance is implemented in Java with the JENA framework. Once an adviser or a student enters the student's information, such as name and student number, into the information request form, the system produces mentoring results based on the student's degree audit and rules for checking the scores of each part of the curriculum, such as cultural subjects, major subjects, and MSC subjects covering mathematics and basic science. Recall and precision are used to evaluate the system: recall checks that the system retrieves all relevant subjects, and precision checks whether the retrieved subjects are relevant to the mentoring results. An officer of the computer science department attended the verification of the derived results. Experimental results using real data from the participating students show that the proposed course guidance system based on the course mentoring ontology provides correct mentoring results at all times. Advisers can also reduce the time spent analyzing a student's degree audit and calculating the score for each part. As a result, the proposed ontology-based system removes the difficulty of the manual mentoring approach and derives mentoring results as correct as those a human would produce.
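
The paper populates and queries its ontology with the JENA framework in Java; the following Python sketch uses rdflib and SPARQL as an analogue of the same populate-then-query pattern. All class and property names are hypothetical, not the paper's ontology.

```python
# Analogue sketch: populate ontology instances from a (simplified) degree
# audit row, then query aggregate credits with SPARQL.
from rdflib import Graph, Literal, Namespace, RDF

CM = Namespace("http://example.org/course#")
g = Graph()

# populate: one student and one completed major subject
g.add((CM.s20140001, RDF.type, CM.Student))
g.add((CM.s20140001, CM.completed, CM.DataStructures))
g.add((CM.DataStructures, RDF.type, CM.MajorSubject))
g.add((CM.DataStructures, CM.credits, Literal(3)))

# query: total major-subject credits completed by the student
q = """
SELECT (SUM(?c) AS ?total) WHERE {
  ?s <http://example.org/course#completed> ?sub .
  ?sub a <http://example.org/course#MajorSubject> ;
       <http://example.org/course#credits> ?c .
}"""
for row in g.query(q):
    print(row.total)  # 3
```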

Nonlinear Vector Alignment Methodology for Mapping Domain-Specific Terminology into General Space (전문어의 범용 공간 매핑을 위한 비선형 벡터 정렬 방법론)

  • Kim, Junwoo;Yoon, Byungho;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.127-146 / 2022
  • Recently, as word embedding has shown excellent performance in various deep learning-based natural language processing tasks, research on the advancement and application of word, sentence, and document embedding has been actively conducted. Among these directions, cross-language transfer, which enables semantic exchange between different languages, is growing alongside the development of embedding models, and academic interest in vector alignment is growing with the expectation that it can be applied to various embedding-based analyses. In particular, vector alignment is expected to enable mapping between specialized and generalized domains: vocabulary from specialized fields such as R&D, medicine, and law could be mapped into the space of a pre-trained language model learned from a huge volume of general-purpose documents, or alignment could provide a clue for mapping vocabulary between mutually different specialized fields. However, the linear vector alignment that has mainly been studied assumes statistical linearity and therefore tends to oversimplify the vector space; it essentially assumes that the different vector spaces are geometrically similar, which inevitably distorts the alignment. To overcome this limitation, we propose a deep learning-based vector alignment methodology that effectively learns the nonlinearity of the data. The proposed methodology sequentially trains a skip-connected autoencoder and a regression model to align the specialized word embeddings, expressed in their own space, to the general embedding space. Finally, through inference with the two trained models, the specialized vocabulary can be aligned in the general space. To verify the performance of the proposed methodology, an experiment was performed on a total of 77,578 documents in the field of health care among national R&D tasks performed from 2011 to 2020. The results confirm that the proposed methodology outperforms existing linear vector alignment in terms of cosine similarity.
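
A minimal PyTorch sketch of the two-stage design described above: a skip-connected autoencoder over the specialized embeddings followed by a regression network into the general space. Dimensions, layer sizes, and losses are assumptions, not the authors' configuration.

```python
# Sketch: skip-connected autoencoder + regression aligner for nonlinear
# mapping of specialized word vectors into a general embedding space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipAutoencoder(nn.Module):
    def __init__(self, dim=300, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x)) + x  # skip connection to the input

aligner = nn.Sequential(              # regression: specialized -> general space
    nn.Linear(300, 512), nn.ReLU(), nn.Linear(512, 300))

spec = torch.randn(32, 300)           # specialized-domain word vectors
general = torch.randn(32, 300)        # their anchor vectors in general space
recon_loss = F.mse_loss(SkipAutoencoder()(spec), spec)   # stage 1 objective
align_loss = F.mse_loss(aligner(spec), general)          # stage 2 objective
```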

Cascade Composition of Translation Rules for the Ontology Interoperability of Simple RDF Message (단순 RDF 메시지의 온톨로지 상호 운용성을 위한 변환 규칙들의 연쇄 조합)

  • Kim, Jae-Hoon;Park, Seog
    • Journal of KIISE:Databases / v.34 no.6 / pp.528-545 / 2007
  • Ontology has recently become an attractive technology, in step with the business strategy of providing a wealth of more intelligent services. The essential problem in application domains using ontology is that all members, agents, and application programs in a domain must share the same ontology concepts. However, the variety of mobile devices, sensing devices, and network components manufactured by different companies, of common carriers, and of contents providers makes it likely that multiple heterogeneous ontologies will coexist. Much past research has addressed this semantic interoperability; the methods can be broadly classified into by-mapping, by-merging, and by-translation. In this research, we focus on by-translation, which uses a translation rule made directly between two heterogeneous ontology datasets, as in OntoMorph. However, manual composition of direct translation rules is inconvenient in itself, and with N ontologies the direct method has a rule-composition complexity of $O(N^2)$ in the worst case. In this paper we therefore introduce the cascade composition of translation rules, based on web openness, to improve this complexity. The work highlights two important factors in an ontology translation system, namely the speed of translation and the convenience of composing translation rules; experiments and comparative analysis against existing methods show that our cascade method is more convenient while ensuring speed and correctness.
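
The complexity argument can be illustrated with a small sketch: given only pairwise rules, a chain from the source to the target ontology is found and applied in order, so N ontologies need rules only along connecting paths rather than $O(N^2)$ direct rules. The rule representation below is a deliberate simplification with hypothetical vocabularies.

```python
# Sketch: cascade composition of pairwise translation rules via BFS.
from collections import deque

# pairwise translation rules between adjacent ontologies (illustrative)
rules = {
    ("A", "B"): lambda msg: {"person": msg["human"]},
    ("B", "C"): lambda msg: {"agent": msg["person"]},
}
edges = {}
for (src, dst) in rules:
    edges.setdefault(src, []).append(dst)

def translate(msg, src, dst):
    """Find a rule chain from src to dst, then apply it left to right."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        node, path = queue.popleft()
        if node == dst:
            for step in path:
                msg = rules[step](msg)
            return msg
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, nxt)]))
    raise ValueError("no rule chain found")

print(translate({"human": "Alice"}, "A", "C"))  # {'agent': 'Alice'}
```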

A Study on Service Ontology Design Scheme Using UML and OCL (UML 및 OCL을 이용한 서비스 온톨로지 설계 방안에 관한 연구)

  • Lee Yun-Su;Chung In-Jeoung
    • The KIPS Transactions:PartD / v.12D no.4 s.100 / pp.627-636 / 2005
  • The intelligent web service has been proposed to enable automatic discovery, invocation, composition, inter-operation, execution monitoring, and recovery of web services through semantic web and agent technology. To realize the intelligent web service, an ontology is necessary so that the computer can reason over and process knowledge. However, creating a service ontology for the intelligent web service has two problems: it consumes a great deal of time and cost, depending on the heuristics of the service developer, and it is hard to map completely between the service and the service ontology. Moreover, the markup language used to describe the service ontology is currently hard for service developers to learn in a short time. This paper proposes an efficient way of designing and creating the service ontology using the MDA methodology. The proposed solution reuses the created model by designing and constructing a web service model in UML based on MDA. After converting the platform-independent web service model to a model dependent on OWL-S, a service ontology description language, it converts that model to an OWL-S service ontology using XMI. The proposed solution has two benefits: service developers can easily construct the service ontology, and both the service and the service ontology can be created from a single model. Moreover, it can effectively reduce time and cost by creating the service ontology automatically from a model, and it copes smoothly with changes in the outer environment, such as a platform change. This paper gives an example to show the validity of designing the web service model and creating the service ontology, and verifies that the created service ontology is valid.
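
As a loose illustration of the model-to-ontology step (not OWL-S compliant and not the paper's converter), the sketch below reads service operations from a drastically simplified XMI-like fragment and emits service triples with rdflib; the XML shape and vocabulary are assumptions.

```python
# Sketch: read operations from a toy XMI-like UML export, emit service triples.
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace, RDF

xmi = """<Model><Class name="BookSearch">
           <Operation name="findByTitle"/></Class></Model>"""

SVC = Namespace("http://example.org/service#")
g = Graph()
g.bind("svc", SVC)
for cls in ET.fromstring(xmi).iter("Class"):
    service = SVC[cls.get("name")]
    g.add((service, RDF.type, SVC.Service))          # one service per UML class
    for op in cls.iter("Operation"):
        g.add((service, SVC.hasProcess, SVC[op.get("name")]))  # ops -> processes
print(g.serialize(format="turtle"))
```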