• Title/Summary/Keyword: DBpedia


Semantic Similarity-Based Contributable Task Identification for New Participating Developers

  • Kim, Jungil;Choi, Geunho;Lee, Eunjoo
    • Journal of Information and Communication Convergence Engineering / v.16 no.4 / pp.228-234 / 2018
  • In software development, the quality of a product often depends on whether its developers can rapidly find and contribute to the proper tasks. Currently, the word data of projects to which newcomers have previously contributed is mainly utilized to find appropriate source files in an ongoing project. However, because of the vocabulary gap between software projects, the accuracy of source file identification based on information retrieval is not guaranteed. In this paper, we propose a novel source file identification method to reduce the vocabulary gap between software projects. The proposed method employs DBpedia Spotlight to identify proper source files based on semantic similarity between the source files of software projects. In an experiment based on the Spring Framework project, we evaluate the accuracy of the proposed method in identifying contributable source files. The experimental results show that the proposed approach achieves better accuracy than the existing method based on comparison of word vocabularies.
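
The abstract does not spell out the similarity computation; one plausible minimal reading is to compare the sets of DBpedia entity URIs spotted in two source files and score their overlap. The sketch below assumes entity extraction (e.g., via DBpedia Spotlight) has already produced the URI sets; all URIs are illustrative stand-ins, not the paper's data.

```python
# Sketch: semantic similarity between two source files as the Jaccard overlap
# of the DBpedia entity URIs spotted in their identifiers and comments.
# The URI sets below are hypothetical stand-ins for DBpedia Spotlight output.

def jaccard_similarity(entities_a: set, entities_b: set) -> float:
    """Jaccard index of two entity-URI sets; 0.0 when both are empty."""
    if not entities_a and not entities_b:
        return 0.0
    return len(entities_a & entities_b) / len(entities_a | entities_b)

file_a = {"dbpedia:Spring_Framework", "dbpedia:Dependency_injection"}
file_b = {"dbpedia:Spring_Framework", "dbpedia:Aspect-oriented_programming"}

print(jaccard_similarity(file_a, file_b))  # 1 shared URI out of 3 total
```

Comparing entity URIs rather than raw tokens is what lets two projects with disjoint vocabularies still be found similar, since different words can resolve to the same DBpedia resource.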

A RBAC Model for Access Control in Linked Data Environments (연결 데이터 환경에서 접근제어를 위한 RBAC 모델)

  • Lee, Chonghyeon;Kim, Jangwon;Jeong, Dongwon;Baik, Doo-Kwon
    • Proceedings of the Korea Information Processing Society Conference / 2010.11a / pp.181-184 / 2010
  • This paper proposes a model that extends the existing RBAC model to be applicable to linked data, for access control of applications developed on the basis of the Linking Open Data project. The proposed model expresses user constraints for the RBAC model as an ontology so that the model can be applied to the ontology structure, and infers appropriate permissions for each user through an intelligent engine. To grant suitable access rights, information that can extend the user profile is obtained from linked data connecting FOAF, Flickr, Twitter, and other sources, and this information is combined with existing information to assign the user's permissions. To verify the effectiveness of the proposed model, we designed an access control system for DBpedia Mobile and implemented a prototype in the Android SDK environment, showing that the proposed model can be applied to applications in linked data environments.
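
The paper builds on the classic RBAC assignment chain (users to roles, roles to permissions). A minimal sketch of that base model, before any ontology extension, could look like the following; the role and permission names are illustrative only.

```python
# Minimal RBAC sketch: users are assigned roles, roles grant permissions.
# This models only the core RBAC the paper extends; the ontology-based
# constraints and inference engine are out of scope here.

ROLE_PERMISSIONS = {
    "viewer": {"read_resource"},
    "editor": {"read_resource", "update_resource"},
}
USER_ROLES = {"alice": {"editor"}, "bob": {"viewer"}}

def is_permitted(user: str, permission: str) -> bool:
    """A user holds a permission if any of their roles grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_permitted("alice", "update_resource"))  # True
print(is_permitted("bob", "update_resource"))    # False
```

The paper's contribution sits on top of this: instead of the static `USER_ROLES` table, role assignment is inferred from an ontology enriched with linked-data profile information.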

Relation Extraction Using Self-attention with Multi Grained Information (다중 정보와 Self-Attention을 이용한 관계 추출)

  • Kim, Jeong-Moo;Lee, Seung-Woo;Char, Jeong-Won
    • Annual Conference on Human and Language Technology / 2019.10a / pp.175-180 / 2019
  • Relation extraction is the task of extracting, from a document, the words corresponding to triples of the form (subject, relation, object). In this paper, we propose a structure that finds the subject or object of a triple using multi-head self-attention. Word embeddings are generated from the Korean Wikipedia and from DBpedia relation words and used as input. After attention between the abstract and the relation word, the multi-head self-attention structure raises the weights of the words in the abstract that are related to the relation word. Repeating the multi-head self-attention process keeps raising the weights of the key words. Taking this as input, the start and end positions of the answer word are selected. The proposed method achieved an F1 score of 0.7981 on a Korean relation extraction dataset that we constructed ourselves. We confirmed that the proposed method can extract appropriate answer words from abstracts using only simple information such as relation words. By expanding the range of relation words, it could further be applied to the extraction of events such as the five Ws and one H (5W1H).
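
At the core of the multi-head self-attention the abstract describes is scaled dot-product attention: each query vector is scored against all keys, and the softmax of those scores mixes the value vectors. A pure-Python sketch of one head, with tiny illustrative vectors standing in for the paper's word embeddings:

```python
import math

# Sketch of scaled dot-product attention, the building block of multi-head
# self-attention. The toy vectors stand in for the word embeddings the paper
# builds from Korean Wikipedia and DBpedia relation words.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # how much each token attends to the others
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy token embeddings
ctx = attention(x, x, x)                   # self-attention: q = k = v
print(ctx[0])
```

Stacking several such heads (with learned projections) and repeating the layer is what lets the weights of relation-relevant words keep growing, as the abstract notes.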


Taxonomy Induction from Wikidata using Directed Acyclic Graph's Centrality (방향 비순환 그래프의 중심성을 이용한 위키데이터 기반 분류체계 구축)

  • Cheon, Hee-Seon;Kim, Hyun-Ho;Kang, Inho
    • Annual Conference on Human and Language Technology / 2021.10a / pp.582-587 / 2021
  • We propose a method of constructing a taxonomy, which is essential for building an integrated Korean knowledge base. Class candidates are extracted from Wikidata, a directed acyclic graph (DAG) is constructed from their hypernym-hyponym relations, and the graph is refined using information such as local reaching centrality, producing a taxonomy with 246 classes and 314 hypernym-hyponym relations. Compared with the taxonomies of existing linked open data such as WordNet and DBpedia, it exhibits a deeper hierarchical structure, and it has a non-tree structure in which a class can have multiple parents. In addition, based on Wikidata properties, classes can be assigned automatically to instances that carry Wikidata information; experiments with this method showed a class assignment coverage of 99.83% and a class prediction accuracy of 99.81%.
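
Local reaching centrality, which the abstract names as a refinement signal, is the fraction of all other nodes reachable from a node along directed edges. A minimal sketch on a toy hypernym-to-hyponym DAG (the class names are illustrative, not the paper's Wikidata data):

```python
# Sketch of local reaching centrality on a DAG: the fraction of the other
# nodes reachable from a given node via directed (hypernym -> hyponym) edges.
# The tiny class graph below is illustrative only.

def local_reaching_centrality(graph: dict, node: str) -> float:
    seen, stack = set(), [node]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen) / (len(graph) - 1)

taxonomy = {
    "entity": ["agent", "work"],
    "agent": ["person"],
    "work": [],
    "person": [],
}
print(local_reaching_centrality(taxonomy, "entity"))  # reaches all 3 others
```

High reaching centrality marks broad upper-level classes, which plausibly guides which candidates survive refinement into the 246-class taxonomy.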


Implementation of Policy based In-depth Searching for Identical Entities and Cleansing System in LOD Cloud (LOD 클라우드에서의 연결정책 기반 동일개체 심층검색 및 정제 시스템 구현)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Internet Computing and Services / v.19 no.3 / pp.67-77 / 2018
  • This paper suggests that each LOD establish its own link policy and publish it to the LOD cloud to provide identity among entities in different LODs. For specifying the link policy, we also propose a vocabulary set founded on the RDF model. We implemented a Policy-based In-depth Searching and Cleansing (PISC) system that proceeds with in-depth searching across LODs by referencing the link policies. PISC has been published on GitHub. Because LODs participate voluntarily in the LOD cloud, the degree of entity identity needs to be evaluated. PISC therefore evaluates the identities and cleanses the searched entities, confining them to those that exceed the user's criterion for the entity identity level. As searching results, PISC provides an entity's detailed contents collected from diverse LODs, along with an ontology customized to those contents. A simulation of PISC was performed on five of DBpedia's LODs. We found that a similarity of 0.9 between the objects of source and target RDF triples provided appropriate expansion and inclusion ratios of searching results. For sufficient identity of searched entities, three or more target LODs need to be specified in the link policy.
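
The cleansing step above amounts to keeping a candidate identity link only when the similarity of the two triples' objects meets the user's threshold (0.9 in the reported simulation). A minimal sketch, using `difflib.SequenceMatcher` as an illustrative stand-in for whatever similarity measure PISC actually uses:

```python
from difflib import SequenceMatcher

# Sketch of the object-similarity test behind identity-link cleansing: two
# triples' objects count as describing the same entity only when similarity
# meets the user's threshold. SequenceMatcher is a stand-in measure here.

def passes_identity_criterion(obj_a: str, obj_b: str,
                              threshold: float = 0.9) -> bool:
    return SequenceMatcher(None, obj_a, obj_b).ratio() >= threshold

print(passes_identity_criterion("Seoul, South Korea", "Seoul, South Korea"))
print(passes_identity_criterion("Seoul", "Busan"))
```

Lowering the threshold admits more links and expands results faster, at the cost of pulling in entities that are not actually identical, which is why the paper reports 0.9 as the sweet spot.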

A Non-morphological Approach for DBpedia URI Spotting within Korean Text (한국어 텍스트의 개체 URI 탐지: 품사 태깅 독립적 개체명 인식과 중의성 해소)

  • Kim, Youngsik;Hahm, Younggyun;Kim, Jiseong;Hwang, Dosam;Choi, Key-Sun
    • Annual Conference on Human and Language Technology / 2014.10a / pp.100-106 / 2014
  • The URI spotting problem is to detect, among the word sequences in a text, those corresponding to entities represented by URIs. The problem consists of two subproblems solved in sequence: first, recognizing which word sequences are entities corresponding to URIs; second, entity disambiguation, i.e., when a recognized entity can correspond to multiple URIs, selecting one of them to resolve the semantic ambiguity. This paper targets DBpedia URIs. URI spotting is similar to named entity recognition, but because it is restricted to entities that can be mapped to URIs (for example DBpedia URIs, i.e., Wikipedia entries), features different from the part-of-speech sequences used as machine learning features in general named entity recognition can be used. For Korean text, we present an SVM-based entity boundary recognition method for Korean DBpedia URI spotting, aiming to eliminate the error propagation from part-of-speech taggers that occurs in general named entity recognition. For entity disambiguation, since the semantic ambiguity depends on the topics of the surrounding sentences, we use LDA and compare it with the methods used for English DBpedia URI spotting.
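
The disambiguation step can be pictured as scoring each candidate URI by how well its topic matches the surrounding context. The sketch below uses simple topic-word overlap as an illustrative stand-in for the LDA topic comparison the paper actually uses; all URIs and word sets are hypothetical.

```python
# Sketch of topic-based URI disambiguation: a spotted surface form maps to
# several candidate DBpedia URIs, and the candidate whose topic words overlap
# the context most wins. Word-set overlap stands in for LDA topic similarity.

CANDIDATES = {
    "Java": {
        "dbpedia:Java_(programming_language)": {"software", "code", "class"},
        "dbpedia:Java_(island)": {"indonesia", "island", "volcano"},
    }
}

def disambiguate(surface: str, context_words: set) -> str:
    """Pick the candidate URI with the largest topic-word overlap."""
    return max(CANDIDATES[surface],
               key=lambda uri: len(CANDIDATES[surface][uri] & context_words))

print(disambiguate("Java", {"software", "class", "compiler"}))
```

With LDA, the hard word sets would be replaced by topic distributions inferred from the surrounding sentences, and overlap by a distribution similarity.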


Extending Bibliographic Information Using Linked Data (링크드 데이터 방식을 통한 서지 정보의 확장에 관한 연구)

  • Park, Zi-Young
    • Journal of the Korean Society for Information Management / v.29 no.1 / pp.231-251 / 2012
  • In this study, Linked Data was used to extend bibliographic data, because Linked Data provides shareable identifiers, data structures, and link information. Linked Data is especially efficient in expanding bibliographic data integrated with a bibliographic ontology. Therefore, Linked Data and bibliographic ontologies were analyzed and available Linked Data was suggested. Based on linking between metadata schemes, bibliographic data, and authority data, issues for effective Linked Data sharing were suggested: 1) selecting proper Linked Data for each bibliographic organization, 2) linking between different Linked Data, and 3) each bibliographic organization developing its own Linked Data.
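
The extension mechanism the abstract relies on can be pictured as merging triples about a record with triples about every resource it is declared identical to via `owl:sameAs`. A minimal single-hop sketch, with all URIs and triples illustrative:

```python
# Sketch of extending a bibliographic record through Linked Data: triples
# about a work are merged across datasets by following owl:sameAs links.
# All URIs and triples below are illustrative, not a real catalog.

triples = [
    ("lib:book42", "dc:title", "Hamlet"),
    ("lib:book42", "owl:sameAs", "dbpedia:Hamlet"),
    ("dbpedia:Hamlet", "dbo:author", "dbpedia:William_Shakespeare"),
]

def extended_description(subject: str, triples) -> dict:
    """Collect predicates of the subject plus of its owl:sameAs targets."""
    same = {subject} | {o for s, p, o in triples
                        if s == subject and p == "owl:sameAs"}
    return {p: o for s, p, o in triples if s in same and p != "owl:sameAs"}

print(extended_description("lib:book42", triples))
```

This is why shareable identifiers matter: without a common URI (or a sameAs link to one), the DBpedia authorship triple could never attach to the library record.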

Study on Development of Journal and Article Visualization Services (학술정보 시각화 서비스 개발에 관한 연구)

  • Cho, Sung-Nam;Seo, Tae-Sul
    • Journal of the Korean Society for Library and Information Science / v.50 no.2 / pp.183-196 / 2016
  • The academic journal is an important medium carrying newly discovered knowledge in various disciplines. It is desirable to consider visualization of journal and article information in order to make the information more insightful and effective than text-based information. In this study, a visualization service platform for journal and article information is developed. Tag clouds are included in the infographics of both journals and articles. Each word in the tag cloud is interlinked with DBpedia using the Linked Open Data (LOD) technique.
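
A tag cloud of this kind reduces to word frequencies (driving display weight) plus a DBpedia link per tag. The sketch below derives the link by naive label-to-URI joining purely for illustration; the actual service presumably resolves labels through proper LOD lookups.

```python
from collections import Counter

# Sketch of the tag-cloud step: word frequencies drive tag weight, and each
# tag carries a DBpedia resource link. Building the URI by joining the label
# with underscores is an illustrative simplification, not the paper's method.

def tag_cloud(words):
    freq = Counter(words)
    return [
        {"tag": w, "weight": n,
         "link": "http://dbpedia.org/resource/" + w.replace(" ", "_")}
        for w, n in freq.most_common()
    ]

cloud = tag_cloud(["semantic web", "ontology", "semantic web"])
print(cloud[0])
```

The per-tag link is what turns a decorative word cloud into an LOD entry point: clicking a term lands on its DBpedia description rather than a plain search.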

Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.171-193 / 2018
  • The LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in an LOD to be reflected in searching results without omissions. An LOD publishes detailed descriptions of entities in RDF triple form. An RDF triple is composed of a subject, a predicate, and an object, and presents a detailed description of an entity. Links in the LOD cloud, named identity links, are realized by asserting that entities of different RDF triples are identical. Currently, an identity link is provided by explicitly creating a link triple whose subject and object are associated with the source and target entities; link triples are then appended to the LOD. With identity links, knowledge acquired from one LOD can be expanded with knowledge from other LODs. The goal of the LOD cloud is to provide users with the opportunity for knowledge expansion. Appending link triples to an LOD, however, faces serious difficulties: identity links between entities must be discovered one by one notwithstanding the enormous scale of LODs, and newly added entities cannot be reflected in searching results until the identity links heading to them are serialized and published to the LOD cloud. Instead of creating enormous numbers of identity links, we propose that each LOD prepare its own link policy. The link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in the target LODs. On searching, it becomes possible to access newly added entities and reflect them in searching results without omissions by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs. For the link policy specification, we have suggested a set of vocabularies that conform to RDFS and OWL.
Identity between entities is evaluated according to the similarity of the source and target entities' objects that are associated with the predicate pairs in the link policy. We implemented a system, the Change Acceptable In-Depth Searching System (CAIDS). With CAIDS, a user's searching request starts from the depth_0 LOD, i.e., surface searching. Referencing the link policies of LODs, CAIDS proceeds with in-depth searching into the LODs of the next depths. To supplement the identity links derived from the link policies, CAIDS uses explicit link triples as well. Following the identity links, CAIDS's in-depth searching progresses. The content of an entity obtained from the depth_0 LOD is expanded with the contents of entities of other LODs that have been discovered to be identical to the depth_0 LOD entity. Expanding the content of a depth_0 LOD entity without the user being aware of those other LODs is the implementation of knowledge expansion, which is the goal of the LOD cloud. The more identity links in the LOD cloud, the wider the content expansion. We have suggested a new way to create identity links abundantly and supply them to the LOD cloud. Experiments on CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides appropriate expansion and inclusion ratios as long as the degree of similarity between source and target objects is 0.8 to 0.9. The expansion ratio, for each depth, is the ratio of the entities discovered at that depth to the entities of the depth_0 LOD. The inclusion ratio, for each depth, is the ratio of the entities discovered only with explicit links to the entities discovered only with link policies. With similarity degrees under 0.8, expansion becomes excessive and contents become distorted. A similarity degree of 0.8 to 0.9 also yields an appropriate number of searched RDF triples. The experiments further evaluated the confidence degree of the contents expanded through in-depth searching.
The confidence degree of content is directly coupled with the identity ratio of an entity, which means its degree of identity to the entity of the depth_0 LOD. The identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing the identity links in advance, an LOD's confidence is evaluated according to the number of identity links incoming to the entities in the LOD. In evaluating the identity ratio, the concept of identity agreement, meaning that multiple identity links head to a common entity, is considered. With the identity agreement concept, the experimental results show that the identity ratio decreases as depth deepens, but rebounds as the depth deepens further. For each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that more than 8 identity links per entity would lead users to trust the expanded contents. The link policy-based in-depth searching method we propose is expected to contribute abundant identity links to the LOD cloud.
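
The identity-ratio recurrence described above multiplies the source entity's ratio by the source LOD's confidence at each hop from the depth_0 entity. A minimal sketch of that propagation along one identity-link path, with illustrative confidence values (the identity-agreement rebound is not modeled here):

```python
# Sketch of the identity-ratio recurrence: an entity reached at depth d
# inherits identity_ratio(source entity) * confidence(source LOD).
# Confidence values below are illustrative, not from the paper's experiments.

def identity_ratio(path_confidences, start_ratio: float = 1.0) -> float:
    """Multiply LOD confidences along an identity-link path from depth_0."""
    ratio = start_ratio
    for conf in path_confidences:
        ratio *= conf
    return ratio

# depth_0 entity -> LOD with confidence 0.9 -> LOD with confidence 0.8
print(identity_ratio([0.9, 0.8]))
```

Since each factor is at most 1, the ratio can only decay along a single path; it is the identity-agreement effect (multiple links converging on one entity) that produces the rebound the experiments report.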

Automatic Target Recognition Study using Knowledge Graph and Deep Learning Models for Text and Image data (지식 그래프와 딥러닝 모델 기반 텍스트와 이미지 데이터를 활용한 자동 표적 인식 방법 연구)

  • Kim, Jongmo;Lee, Jeongbin;Jeon, Hocheol;Sohn, Mye
    • Journal of Internet Computing and Services / v.23 no.5 / pp.145-154 / 2022
  • Automatic Target Recognition (ATR) technology is emerging as a core technology of Future Combat Systems (FCS). Conventional ATR is performed based on IMINT (image intelligence) collected from SAR sensors, and various image-based deep learning models are used. However, although the data and information related to ATR are expanding to HUMINT (human intelligence) and SIGINT (signal intelligence) with the development of IT and sensing technology, ATR still uses only image-oriented IMINT data. In complex and diversified battlefield situations, it is difficult to guarantee high ATR accuracy and generalization performance with image data alone. Therefore, in this paper we propose a knowledge graph-based ATR method that can utilize image and text data simultaneously. The main idea of the knowledge graph and deep model-based ATR method is to convert ATR images and text into graphs according to the characteristics of each data type, align them to the knowledge graph, and connect the heterogeneous ATR data through the knowledge graph. To convert an ATR image into a graph, an object-tag graph consisting of object tags as nodes is generated from the image using a pre-trained image object recognition model and the vocabulary of the knowledge graph. For the ATR text, a word graph composed of nodes carrying key vocabulary for ATR is generated using a pre-trained language model, TF-IDF, a co-occurrence word graph, and the vocabulary of the knowledge graph. The two generated types of graphs are connected to the knowledge graph using an entity alignment model to improve ATR performance from images and texts. To prove the superiority of the proposed method, 227 web documents and 61,714 RDF triples from DBpedia were collected, and comparison experiments were performed on precision, recall, and F1-score from the perspective of entity alignment.
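
The evaluation the abstract ends with scores predicted (graph node, knowledge-graph entity) alignment pairs against gold pairs. A minimal sketch of that precision/recall/F1 computation; the pairs below are illustrative, not the paper's data.

```python
# Sketch of the entity-alignment evaluation: predicted (node, KG entity)
# pairs are compared to gold pairs with precision, recall, and F1.
# The example pairs are illustrative only.

def prf1(predicted: set, gold: set):
    tp = len(predicted & gold)  # correctly aligned pairs
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

pred = {("tank_img_03", "dbpedia:T-80"), ("radar_doc_11", "dbpedia:Radar")}
gold = {("tank_img_03", "dbpedia:T-80"), ("radar_doc_11", "dbpedia:AESA")}
print(prf1(pred, gold))  # (0.5, 0.5, 0.5)
```

Treating alignment as exact pair matching makes the metrics strict: a node aligned to a related but different DBpedia entity counts as both a false positive and a false negative.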