• Title/Summary/Keyword: Knowledge extraction


Object of Interest Extraction Using Gabor Filters (가버 필터에 기반한 관심 객체 검출)

  • Kim, Sung-Young
    • Journal of the Korea Society of Computer and Information / v.13 no.2 / pp.87-94 / 2008
  • In this paper, an extraction method for objects of interest in color images is proposed. The proposed method can extract objects of interest from a complex background without any prior knowledge. For object extraction, Gabor images that contain information about object locations are created using Gabor filters. Based on these images, the initial locations of attention windows are determined, from which image features are selected to extract objects. To extract objects, I partially modify a previous method and apply the modified version. To evaluate the performance of the proposed method, precision, recall, and F-measure are calculated between its extraction results and manually extracted results. I verify the performance of the proposed method based on these accuracies. Through comparison with an existing method, I also verify the superiority of the proposed method.

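As a rough illustration of the filtering step described in the abstract above (not the paper's exact pipeline), the sketch below builds a real-valued Gabor kernel in NumPy and compares an image's response energy at two orientations; all parameter values and the toy striped image are assumptions for the demo.

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, lam=8.0, sigma=3.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a Gaussian envelope times an oriented cosine carrier."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam + psi)

def filter_energy(img, kernel):
    """Sum of squared responses of a direct (valid-mode) 2-D correlation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return float(np.sum(out**2))

# Vertical stripes of period 8: the kernel whose carrier matches them responds strongly.
img = np.cos(2 * np.pi * np.arange(32) / 8)[None, :].repeat(32, axis=0)
e_aligned = filter_energy(img, gabor_kernel(theta=0.0))
e_orthogonal = filter_energy(img, gabor_kernel(theta=np.pi / 2))
```

In a pipeline like the one described, the peaks of such an energy map would suggest initial attention-window locations.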

Improving Embedding Model for Triple Knowledge Graph Using Neighborliness Vector (인접성 벡터를 이용한 트리플 지식 그래프의 임베딩 모델 개선)

  • Cho, Sae-rom;Kim, Han-joon
    • The Journal of Society for e-Business Studies / v.26 no.3 / pp.67-80 / 2021
  • The node embedding technique for learning graph representations plays an important role in obtaining good-quality results in graph mining. Until now, representative node embedding techniques have been studied for homogeneous graphs, so it is difficult to learn knowledge graphs in which each edge carries a unique meaning. To resolve this problem, the conventional Triple2Vec technique builds an embedding model by learning a triple graph in which each node pair and edge of the knowledge graph becomes a single node. However, the Triple2Vec embedding model has limited room for performance improvement because it computes the relationship between triple nodes with a simple measure. Therefore, this paper proposes a feature extraction technique based on a graph convolutional neural network to improve the Triple2Vec embedding model. The proposed method extracts the neighborliness vector of the triple graph and learns the relationship between neighboring nodes for each node in the triple graph. We prove that the embedding model applying the proposed method is superior to the existing Triple2Vec model through category classification experiments using the DBLP, DBpedia, and IMDB datasets.
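The abstract does not specify how the neighborliness vector is computed, but the graph-convolutional idea it invokes can be sketched as a single propagation step: each node's features are mixed with its neighbors' via a symmetrically normalized adjacency matrix. The tiny triple graph and weights below are assumptions for illustration only.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: normalized adjacency (with self-loops)
    times node features, then a linear map and a ReLU."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

# Hypothetical triple graph: 3 "triple nodes"; an edge links triples sharing an entity.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3)                                  # one-hot initial features
W = np.random.default_rng(0).standard_normal((3, 2))
Z = gcn_layer(A, H, W)                         # each row mixes own and neighbor features
```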

Common risk factors for postoperative pain following the extraction of wisdom teeth

  • Rakhshan, Vahid
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons / v.41 no.2 / pp.59-65 / 2015
  • The extraction of third molars is a common task carried out at dental/surgery clinics. Postoperative pain is one of the two most common complications of this surgery, along with dry socket. Knowledge of the frequent risk factors for this complication is useful in identifying high-risk patients, planning treatment, and preparing patients mentally. Since the risk factors for postoperative pain have never been summarized before, while the risk factors for dry socket have been highly debated, this report summarizes the literature regarding common predictors of postextraction pain. Except for surgical difficulty and the surgeon's experience, the influence of the other risk factors (age, gender, and oral contraceptive use) was rather inconclusive. The effect of female gender or oral contraceptive use might mainly be associated with estrogen levels (in the case of dry socket), which can differ considerably from case to case. Improvement in and unification of statistical and diagnostic methods seem necessary. In addition, each risk factor was actually a combination of various independent variables, which should instead be targeted in more comprehensive studies.

Efficient Extraction of Hierarchically Structured Rules Using Rough Sets

  • Lee, Chul-Heui;Seo, Seon-Hak
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.205-210 / 2004
  • This paper deals with rule extraction from data using rough set theory. We construct the rule base in a hierarchical granulation structure by applying the core as a classification criterion at each level. When more than one core exists, coverage is used to select an appropriate one among them to increase the classification rate and accuracy. In addition, a probabilistic approach is suggested so that the partially useful information contained in inconsistent data can contribute to knowledge reduction, in order to decrease the effect of the uncertainty or vagueness of the data. As a result, the proposed method yields a rule base that is more appropriate and efficient in compatibility and size. The simulation result shows that it gives good performance despite very simple rules and short conditionals.
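The core notion used above comes from standard rough set theory: an attribute belongs to the core if removing it lowers the dependency of the decision on the remaining attributes. A minimal sketch on a made-up decision table (the table and attribute names are illustrative, not from the paper):

```python
def ind_classes(rows, attrs):
    """Partition row indices into indiscernibility classes w.r.t. the given attributes."""
    groups = {}
    for i, r in enumerate(rows):
        groups.setdefault(tuple(r[a] for a in attrs), []).append(i)
    return list(groups.values())

def dependency(rows, attrs, dec):
    """Fraction of objects in the positive region: objects whose
    indiscernibility class is consistent on the decision attribute."""
    pos = sum(len(cls) for cls in ind_classes(rows, attrs)
              if len({rows[i][dec] for i in cls}) == 1)
    return pos / len(rows)

def core(rows, attrs, dec):
    """Attributes whose removal strictly lowers the dependency degree."""
    full = dependency(rows, attrs, dec)
    return {a for a in attrs
            if dependency(rows, [x for x in attrs if x != a], dec) < full}

# Toy decision table: attribute 'a' alone determines decision 'd'; 'b' is redundant.
table = [
    {"a": 0, "b": 0, "d": 0},
    {"a": 0, "b": 1, "d": 0},
    {"a": 1, "b": 0, "d": 1},
    {"a": 1, "b": 1, "d": 1},
]
```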

A Study on the Self-Evolving Expert System using Neural Network and Fuzzy Rule Extraction (인공신경망과 퍼지규칙 추출을 이용한 상황적응적 전문가시스템 구축에 관한 연구)

  • Lee, Kun-Chang;Kim, Jin-Sung
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.3 / pp.231-240 / 2001
  • Conventional expert systems have been criticized for their lack of capability to adapt to changing decision-making environments. In the literature, many methods have been proposed to make expert systems more environment-adaptive by incorporating fuzzy logic and neural networks. The objective of this paper is to propose a new approach to building a self-evolving expert system inference mechanism by integrating a fuzzy neural network with a fuzzy rule extraction technique. The main idea of our proposed approach is to fuzzify the training data, train a fuzzy neural network on them, extract a set of fuzzy rules from the trained network, organize a knowledge base, and refine the fuzzy rules by applying a pruning algorithm when the decision-making environment is detected to have changed significantly. To prove its validity, we tested our proposed self-evolving expert system inference mechanism on bankruptcy data and compared its results with those of a conventional neural network. Non-parametric statistical analysis of the experimental results showed that our proposed approach is valid at a statistically significant level.

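The fuzzification step that opens the pipeline described above is standard and easy to illustrate: crisp inputs are mapped to membership degrees in overlapping linguistic sets. The triangular membership functions and the "financial ratio" sets below are assumptions for the demo, not the paper's actual definitions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x, sets):
    """Map a crisp value to a membership degree for each labeled fuzzy set."""
    return {label: tri(x, *abc) for label, abc in sets.items()}

# Hypothetical linguistic sets for a financial ratio in bankruptcy data
ratio_sets = {
    "low":    (-0.5, 0.0, 0.5),
    "medium": (0.0, 0.5, 1.0),
    "high":   (0.5, 1.0, 1.5),
}
```

A value of 0.25, for instance, is partly "low" and partly "medium", which is exactly the overlap that lets extracted rules cover borderline cases.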

A Method for Caption Segmentation using Minimum Spanning Tree

  • Chun, Byung-Tae;Kim, Kyuheon;Lee, Jae-Yeon
    • Proceedings of the IEEK Conference / 2000.07b / pp.906-909 / 2000
  • Conventional caption extraction methods use inter-frame differences or color segmentation over the whole image. Because these methods depend heavily on heuristics, a priori knowledge of the captions to be extracted is required, and the methods are difficult to implement. In this paper, we propose a method that uses few heuristics and a simplified algorithm. We use topographical features of characters to extract character points and a Kruskal minimum spanning tree (KMST) to extract candidate caption regions. Character regions are determined by testing several conditions and verifying the candidate regions. Experimental results show a candidate region extraction rate of 100% and a character region extraction rate of 98.2%, demonstrating that caption areas are well extracted even from complex images.

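The MST-based grouping idea can be sketched with stdlib Python: build a Kruskal MST over extracted character points, cut edges longer than a threshold, and treat the remaining connected components as candidate caption regions. The points and threshold below are invented for the demo; the paper's verification conditions are not reproduced.

```python
import math

def kruskal_mst(points):
    """Kruskal's algorithm over the complete Euclidean graph of the points."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i in range(len(points)) for j in range(i + 1, len(points)))
    mst = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((d, i, j))
    return mst

def candidate_regions(points, max_edge):
    """Cut MST edges longer than max_edge; connected components are candidates."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for d, i, j in kruskal_mst(points):
        if d <= max_edge:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two spatially separated clusters of character points
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
```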

Road Extraction Based on Watershed Segmentation for High Resolution Satellite Images

  • Chang, Li-Yu;Chen, Chi-Farn
    • Proceedings of the KSRS Conference / 2003.11a / pp.525-527 / 2003
  • Recently, the spatial resolution of earth observation satellites has increased significantly, to a few meters. Such high-spatial-resolution images will certainly provide a wealth of information for detail-hungry remote sensing users. However, it is difficult to develop automated algorithms for image feature extraction and pattern recognition at this resolution. In this study, we propose a two-stage procedure to extract road information from high-resolution satellite images. In the first stage, a watershed segmentation technique is developed to classify the image into various regions. In the second stage, knowledge about roads is built and used to extract road regions. We use panchromatic and multi-spectral images from the IKONOS satellite as the test dataset. The experimental results show that the proposed technique can generate suitable and meaningful road objects from high-spatial-resolution satellite images. However, misclassified regions, such as parking lots recognized as roads, need further refinement in future research.

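The two-stage idea — segment first, then apply road knowledge — can be sketched without reproducing the watershed itself. Below, connected-component labeling stands in for the segmentation output, and a simple elongation rule (bounding-box aspect ratio) stands in for the road knowledge; both the grid and the threshold are assumptions for the demo.

```python
def regions(grid):
    """4-connected components of nonzero cells (stand-in for watershed regions)."""
    h, w = len(grid), len(grid[0])
    seen, comps = set(), []
    for si in range(h):
        for sj in range(w):
            if grid[si][sj] and (si, sj) not in seen:
                stack, comp = [(si, sj)], []
                seen.add((si, sj))
                while stack:
                    i, j = stack.pop()
                    comp.append((i, j))
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w and grid[ni][nj] and (ni, nj) not in seen:
                            seen.add((ni, nj))
                            stack.append((ni, nj))
                comps.append(comp)
    return comps

def road_like(comp, min_elongation=4.0):
    """Knowledge rule: roads are elongated; test the bounding-box aspect ratio."""
    ys = [i for i, _ in comp]
    xs = [j for _, j in comp]
    dy = max(ys) - min(ys) + 1
    dx = max(xs) - min(xs) + 1
    return max(dy, dx) / min(dy, dx) >= min_elongation

# One thin 1x8 strip (road-like) and one 3x3 block (e.g., a parking lot)
grid = [
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
    [0] * 10,
    [0] * 10,
    [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
]
```

A shape rule like this would reject compact parking-lot-shaped regions, which is precisely the refinement the abstract flags as future work.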

Building Extraction from Lidar Data and Aerial Imagery using Domain Knowledge about Building Structures

  • Seo, Su-Young
    • Korean Journal of Remote Sensing / v.23 no.3 / pp.199-209 / 2007
  • Traditionally, aerial images have been used as the main sources for compiling topographic maps. In recent years, lidar data has been exploited as another type of mapping data. Regarding their performance, aerial imagery can delineate object boundaries but omits many of these boundaries during feature extraction, while lidar provides direct information about the heights of object surfaces but has limitations with respect to boundary localization. Considering the characteristics of the sensors, this paper proposes an approach to extracting buildings from lidar and aerial imagery that is based on the complementary characteristics of optical and range sensors. To detect building regions, relationships among elevation contours are represented as directional graphs, which are searched for the contours corresponding to the external boundaries of buildings. To generate building models, a wing model is proposed to assemble roof surface patches into a complete building model. Building models are then projected onto and checked against features in the aerial images. Experimental results show that the proposed approach provides an efficient and accurate way to extract building models.
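The contour-graph search described above can be illustrated in miniature: if containment among elevation contours is a digraph with an edge from each contour to every contour it contains, then contours with no incoming edge are the candidates for external building boundaries. The graph below is hypothetical; the paper's wing model is not reproduced.

```python
def external_boundaries(containment):
    """Roots of a contour-containment digraph (edges: parent -> contained child)
    correspond to candidate external boundaries."""
    nodes = set(containment)
    children = set()
    for kids in containment.values():
        children.update(kids)
        nodes.update(kids)
    return sorted(nodes - children)

# Hypothetical contours: c1 contains c2 and c3 (roof details); c4 stands alone.
graph = {"c1": ["c2", "c3"], "c2": [], "c3": [], "c4": []}
```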

Syntactic and semantic information extraction from NPP procedures utilizing natural language processing integrated with rules

  • Choi, Yongsun;Nguyen, Minh Duc;Kerr, Thomas N. Jr.
    • Nuclear Engineering and Technology / v.53 no.3 / pp.866-878 / 2021
  • Procedures play a key role in ensuring safe operation at nuclear power plants (NPPs). Development and maintenance of a large number of procedures reflecting the best knowledge available in all relevant areas is a complex job. This paper introduces a newly developed methodology and the implemented software, called iExtractor, for the extraction of syntactic and semantic information from NPP procedures utilizing natural language processing (NLP)-based technologies. The steps of the iExtractor integrated with sets of rules and an ontology for NPPs are described in detail with examples. Case study results of the iExtractor applied to selected procedures of a U.S. commercial NPP are also introduced. It is shown that the iExtractor can provide overall comprehension of the analyzed procedures and indicate parts of procedures that need improvement. The rich information extracted from procedures could be further utilized as a basis for their enhanced management.
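The abstract does not disclose iExtractor's rule sets, but rule-based extraction from procedure text of this kind can be sketched with a regular expression: many procedure steps pair a numbered label with an imperative action verb and its object. The rule, step format, and sample lines below are assumptions for illustration.

```python
import re

# Hypothetical rule: a step label, an all-caps action verb, then the object phrase.
STEP = re.compile(r"^\s*(?P<num>\d+(?:\.\d+)*)\s+(?P<verb>[A-Z]+)\s+(?P<obj>.+?)\.?$")

def extract_steps(lines):
    """Return (step number, action verb, object) triples for lines matching the rule."""
    out = []
    for line in lines:
        m = STEP.match(line)
        if m:
            out.append((m.group("num"), m.group("verb"), m.group("obj")))
    return out

procedure = [
    "4.1 VERIFY that charging pump A is running.",
    "4.2 OPEN valve CV-121.",
    "Note: refer to attachment 2.",
]
```

Lines that fail every rule (such as the note above) are the kind a tool like this could flag for review.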

Measurement Criteria for Ontology Extraction Tools (온톨로지 자동추출도구의 기능적 성능 평가를 위한 평가지표의 개발 및 적용)

  • Park, Jin-Soo;Cho, Won-Chin;Rho, Sang-Kyu
    • Journal of Intelligence and Information Systems / v.14 no.4 / pp.69-87 / 2008
  • The Web is evolving toward the Semantic Web. Ontologies are considered a crucial component of the Semantic Web, since they form the backbone of its knowledge representation. However, most ontologies are still built manually. Manually building an ontology is a time-consuming activity that requires many resources. Consequently, the need for automatic ontology extraction tools has increased over the last decade, and many tools have been developed for this purpose. Yet there is no comprehensive framework for evaluating such tools. In this paper, we propose a set of criteria for evaluating ontology extraction tools and carry out an experiment on four popular ontology extraction tools (i.e., OntoLT, Text-To-Onto, TERMINAE, and OntoBuilder) using the proposed evaluation framework. The proposed framework can serve as a useful benchmark for developers of ontology extraction tools.

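Evaluation frameworks like the one above (and the accuracy measures used in several entries in this listing) typically reduce to set-based precision, recall, and F-measure over extracted versus reference items. A minimal sketch, with made-up term sets:

```python
def prf(extracted, gold):
    """Set-based precision, recall, and F1 for extracted items vs. a reference set."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)                     # true positives
    p = tp / len(extracted) if extracted else 0.0  # precision
    r = tp / len(gold) if gold else 0.0            # recall
    f = 2 * p * r / (p + r) if p + r else 0.0      # harmonic mean
    return p, r, f

# Hypothetical ontology concepts: a tool extracted 3 terms, 2 of them correct.
p, r, f = prf({"car", "wheel", "engine"}, {"car", "wheel", "door", "seat"})
```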