• Title/Summary/Keyword: semantic inference

Usefulness of RDF/OWL Format in Pediatric and Oncologic Nuclear Medicine Imaging Reports (소아 및 종양 핵의학 영상판독에서 RDF/OWL 데이터의 유용성)

  • Hwang, Kyung Hoon;Lee, Haejun;Koh, Geon;Choi, Duckjoo;Sun, Yong Han
    • Journal of Biomedical Engineering Research / v.36 no.4 / pp.128-134 / 2015
  • Recently, structured data in RDF/OWL format has played an increasingly vital role in the semantic web. We converted free-text pediatric and oncologic nuclear medicine imaging reports into RDF/OWL and evaluated their usefulness by comparing SPARQL query results with results retrieved manually by physicians from the free-text reports. SPARQL queries achieved 95% recall for simple queries and 91% recall for dedicated queries; over 20 clinical query items in total, they reached 93% recall (51 of 55 lesions) and 100% precision. All results missed by the SPARQL queries required some form of inference. Nuclear medicine imaging reports in RDF/OWL format were thus very useful for answering both simple and dedicated queries with SPARQL. Further study with a larger number of cases and additional knowledge for inference is warranted.
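
The convert-then-query workflow described above can be illustrated with a small, hypothetical example using rdflib; the class and predicate names (ex:ImagingReport, ex:describesLesion, ex:site) and the sample data are invented stand-ins, not the paper's actual ontology.

```python
# A minimal sketch of querying RDF-converted reports with SPARQL, assuming
# rdflib is installed; the ontology terms below are hypothetical.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/nm-report#")

g = Graph()
g.bind("ex", EX)

# A toy "report" resource standing in for one converted free-text report.
report, lesion = EX["report1"], EX["lesion1"]
g.add((report, RDF.type, EX.ImagingReport))
g.add((report, EX.modality, Literal("PET-CT")))
g.add((report, EX.describesLesion, lesion))
g.add((lesion, EX.site, Literal("liver")))

# A "simple query": find reports that describe a lesion at a given site.
q = """
PREFIX ex: <http://example.org/nm-report#>
SELECT ?report WHERE {
    ?report a ex:ImagingReport ;
            ex:describesLesion ?lesion .
    ?lesion ex:site "liver" .
}
"""
for row in g.query(q):
    print(row.report)
```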

Icefex: Protocol Format Extraction from IL-based Concolic Execution

  • Pan, Fan;Wu, Li-Fa;Hong, Zheng;Li, Hua-Bo;Lai, Hai-Guang;Zheng, Chen-Hui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.3 / pp.576-599 / 2013
  • Protocol reverse engineering is useful for many security applications, including intelligent fuzzing, intrusion detection and fingerprint generation. Since manual reverse engineering is a time-consuming and tedious process, a number of automatic techniques have been proposed. However, the accuracy of these techniques is limited by the complexity of binary instructions, and the derived formats miss constraints that are critical for security applications. In this paper, we propose a new approach to protocol format extraction. Our approach reasons only about how the program evaluates the input message, as observed through concolic execution, which enables field identification and constraint inference with high accuracy. Moreover, it performs binary analysis with low complexity by reducing modern instruction sets to BIL, a small, well-specified and architecture-independent language. We have implemented our approach in a system called Icefex and evaluated it on real-world implementations of the DNS, eDonkey, FTP, HTTP and McAfee ePO protocols. Experimental results show that our approach extracts protocol formats more accurately and effectively than other approaches.
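
As a rough illustration of the field-identification step, the sketch below groups input-byte offsets that are constrained together by the same path conditions; the constraint representation and example data are assumptions for illustration, not the Icefex/BIL pipeline itself.

```python
# A simplified, hypothetical sketch of field identification: each path
# constraint gathered from (con)colic execution is expressed here as the set
# of input-byte offsets it reads plus a human-readable predicate; consecutive
# offsets that are always referenced together are merged into one field.

def identify_fields(constraints):
    """constraints: list of (offsets, predicate) pairs."""
    # Offsets referenced by the same constraint are assumed to belong together.
    field_map = {}                      # offset -> indices of constraints on it
    for i, (offsets, _) in enumerate(constraints):
        for off in offsets:
            field_map.setdefault(off, set()).add(i)

    # Merge consecutive offsets that share identical constraint membership.
    fields, current = [], []
    for off in sorted(field_map):
        if current and (off != current[-1] + 1
                        or field_map[off] != field_map[current[0]]):
            fields.append(tuple(current))
            current = []
        current.append(off)
    if current:
        fields.append(tuple(current))
    return fields

# Example: bytes 0-1 checked as a magic number, byte 2 as a message type.
constraints = [
    ({0, 1}, "input[0:2] == 0xE3E5"),
    ({2},    "input[2] in {0x01, 0x4C}"),
]
print(identify_fields(constraints))     # -> [(0, 1), (2,)]
```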

Automatic Detection of Off-topic Documents using ConceptNet and Essay Prompt in Automated English Essay Scoring (영어 작문 자동채점에서 ConceptNet과 작문 프롬프트를 이용한 주제-이탈 문서의 자동 검출)

  • Lee, Kong Joo;Lee, Gyoung Ho
    • Journal of KIISE / v.42 no.12 / pp.1522-1534 / 2015
  • This work presents a new method that predicts, without training data, whether an input essay is written on a given topic. ConceptNet is a common-sense knowledge base generated automatically from sentences extracted from a variety of document types; an essay prompt is the topic an essay should address. The proposed method uses ConceptNet and the essay prompt to decide whether an input essay is off-topic. We introduce a way to find the shortest path between two ConceptNet nodes and to calculate the semantic similarity between them. Both an essay prompt and a student's essay can be represented by concept nodes in ConceptNet, and the semantic similarity between the prompt concepts and the essay concepts is used to rank "on-topicness"; an essay with a low rank is regarded as off-topic. In an evaluation with eight essay prompts and a student-essay collection, the proposed method outperforms previous studies. Because ConceptNet enables simple text inference, the method looks especially promising for essay prompts that require such inference.
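
A minimal sketch of the path-based similarity idea follows; the toy edge list stands in for ConceptNet assertions, and the 1/(1+d) score is one common choice rather than necessarily the paper's exact formula.

```python
# Shortest path between two concepts in a toy graph, turned into a similarity
# score; the edges below are hypothetical stand-ins for ConceptNet assertions.
from collections import deque

edges = {
    "travel":   ["airplane", "vacation", "hotel"],
    "airplane": ["travel", "airport"],
    "vacation": ["travel", "beach"],
    "essay":    ["writing"],
}

def shortest_path_length(graph, src, dst):
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt == dst:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None                         # no path: concepts unrelated

def similarity(graph, a, b):
    d = shortest_path_length(graph, a, b)
    return 0.0 if d is None else 1.0 / (1.0 + d)

# Prompt concept vs. essay concept: a short path suggests "on topic".
print(similarity(edges, "travel", "beach"))     # 1 / (1 + 2) = 0.333...
```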

Domain-specific Ontology Construction by Terminology Processing (전문용어의 처리에 의한 도메인 온톨로지의 구축)

  • 임수연;송무희;이상조
    • Journal of KIISE: Software and Applications / v.31 no.3 / pp.353-360 / 2004
  • An ontology defines the terms used in a specific domain and the relationships between them, and represents them as a hierarchical taxonomy. This paper proposes a semi-automatic method for constructing a domain-specific ontology based on terminology processing. It presents an algorithm that extracts terms according to noun/suffix patterns in domain texts and derives their hierarchical structure. Experiments were carried out on pharmacy-related documents. For single-word terms identified by noun/suffix patterns, the average accuracy was 92.57%; for multi-word terms, it was 66.64%. Because the constructed ontology forms natural semantic clusters based on suffixes and semantic information, it can be used for domain-specific knowledge access such as information lookup, or as a basis for inference to improve search capabilities.
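
The noun/suffix extraction idea can be sketched roughly as follows; because the paper works on Korean pharmacy texts, the English suffix list here is purely a hypothetical analogue of the original patterns.

```python
# Suffix-driven term extraction and clustering, using a hypothetical list of
# pharmacy-related English suffixes as a stand-in for the paper's patterns.
import re

SUFFIXES = ("itis", "osis", "amine", "azole", "cillin")   # hypothetical

def extract_terms(text):
    """Collect words whose suffix marks them as candidate domain terms."""
    return [w for w in re.findall(r"[A-Za-z]+", text.lower())
            if w.endswith(SUFFIXES)]

def group_by_suffix(terms):
    """Cluster extracted terms by their suffix, the seed of the hierarchy."""
    clusters = {}
    for term in terms:
        for suf in SUFFIXES:
            if term.endswith(suf):
                clusters.setdefault(suf, set()).add(term)
                break
    return clusters

text = "Amoxicillin and ampicillin treat sinusitis; histamine worsens dermatitis."
print(group_by_suffix(extract_terms(text)))
# e.g. {'cillin': {'amoxicillin', 'ampicillin'}, 'itis': {'sinusitis', 'dermatitis'}, 'amine': {'histamine'}}
```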

Learning Similarity with Probabilistic Latent Semantic Analysis for Image Retrieval

  • Li, Xiong;Lv, Qi;Huang, Wenting
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.4 / pp.1424-1440 / 2015
  • It is challenging to find the intended images among a large number of candidates. Content-based image retrieval (CBIR) is the most promising way to tackle this problem, and its central issue is measuring the similarity of images so as to cover variance in shape, color, pose, illumination, etc. While previous works have made significant progress, their ability to adapt to the dataset has not been fully explored. In this paper, we propose a similarity learning method based on a probabilistic generative model, probabilistic latent semantic analysis (PLSA). It first derives a Fisher kernel, a function over the parameters and variables, from PLSA. The parameters are then determined by simultaneously maximizing the log-likelihood of PLSA and the retrieval performance over the training dataset. The main advantages of this work are twofold: (1) a similarity measure derived from PLSA that fully exploits the data distribution and Bayesian inference; (2) model parameters learned by maximizing the fit of the model to the data and the retrieval performance simultaneously. The proposed method (PLSA-FK) is empirically evaluated on three datasets, and the results show promising performance.
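
The toy sketch below only shows the "compare images in topic space" idea behind PLSA-based similarity; the paper's actual contribution, a Fisher kernel over PLSA parameters learned jointly with retrieval performance, is not reproduced here, and all numbers are made up.

```python
# Images represented by their PLSA topic posteriors P(z|d), compared with a
# cosine similarity; a greatly reduced stand-in for the PLSA-FK kernel.
import numpy as np

def topic_similarity(p_z_d1, p_z_d2):
    """Cosine similarity between two topic-posterior vectors."""
    a, b = np.asarray(p_z_d1, float), np.asarray(p_z_d2, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query   = [0.70, 0.20, 0.10]            # P(z|query image), hypothetical
gallery = {
    "img_beach.jpg":  [0.65, 0.25, 0.10],
    "img_street.jpg": [0.05, 0.15, 0.80],
}
ranked = sorted(gallery, key=lambda k: topic_similarity(query, gallery[k]),
                reverse=True)
print(ranked)                           # the beach image ranks first
```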

A Parallel Speech Recognition Model on Distributed Memory Multiprocessors (분산 메모리 다중프로세서 환경에서의 병렬 음성인식 모델)

  • 정상화;김형순;박민욱;황병한
    • The Journal of the Acoustical Society of Korea / v.18 no.5 / pp.44-51 / 1999
  • This paper presents a massively parallel computational model for the efficient integration of speech and natural language understanding. The phoneme model is based on continuous Hidden Markov Models with context-dependent phonemes, and the language model is based on a knowledge-base approach. To construct the knowledge base, we adopt a hierarchically structured semantic network and a memory-based parsing technique that employs parallel marker passing as its inference mechanism. Our parallel speech recognition algorithm is implemented on a multi-Transputer system with distributed-memory MIMD multiprocessors. Experimental results show that the parallel system achieves higher recognition accuracy than a word-network-based speech recognition system, and accuracy improves further when code-phoneme statistics are applied. Speedup experiments also demonstrate the feasibility of a real-time parallel speech recognition system.
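
A small sketch of the marker-passing inference idea appears below; the toy semantic network and word list are hypothetical, and a real implementation would distribute this propagation across processors rather than run it sequentially.

```python
# Markers start at concept nodes activated by recognized words and spread along
# links; nodes where markers from different origins meet suggest a joint
# interpretation. Network and words are invented for illustration.
from collections import defaultdict

links = {
    "flight": ["travel"],
    "seoul":  ["city", "travel"],
    "city":   ["location"],
    "travel": ["action"],
}

def pass_markers(start_nodes, steps=1):
    markers = defaultdict(set)          # node -> set of originating concepts
    frontier = {node: {node} for node in start_nodes}
    for node, origins in frontier.items():
        markers[node] |= origins
    for _ in range(steps):
        next_frontier = defaultdict(set)
        for node, origins in frontier.items():
            for neighbor in links.get(node, []):
                next_frontier[neighbor] |= origins
                markers[neighbor] |= origins
        frontier = next_frontier
    # Collision nodes: reached by markers from more than one recognized word.
    return {n: o for n, o in markers.items() if len(o) > 1}

print(pass_markers(["flight", "seoul"]))    # 'travel' collects both markers
```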

Ontology Parser Design for Speed Improvement of Ontology Parsing (온톨로지 파싱 속도향상을 위한 온톨로지 파서 설계)

  • Kim, Won-Pil;Kong, Hyun-Jang
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.4 / pp.96-101 / 2010
  • A core issue for the semantic web is the efficiency of ontology parsing; parsing and inference underlie the meaningful information retrieval that is the ultimate purpose of the semantic web. However, most existing ontology authoring tools do not parse ontologies efficiently. In this study, we therefore design a two-step ontology parser that extracts all of the facts contained in an ontology more quickly. In the first step, a token extractor collects all of the tokens in the ontology; a triple extractor then extracts statements from the collected tokens. Our experiments confirm that the parser designed in this study processes ontologies faster than existing ontology parsers.
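
The two-step design can be sketched on a tiny N-Triples-like input as follows; the input format and function names are simplified illustrations, not the parser described in the paper.

```python
# Step 1: a token extractor scans the serialization; step 2: a triple extractor
# groups tokens into (subject, predicate, object) statements.

def extract_tokens(source):
    """Step 1: split the serialization into tokens, dropping terminators."""
    tokens = []
    for line in source.strip().splitlines():
        tokens.extend(t for t in line.split() if t != ".")
    return tokens

def extract_triples(tokens):
    """Step 2: every three consecutive tokens form one statement."""
    return [tuple(tokens[i:i + 3]) for i in range(0, len(tokens), 3)]

ontology = """
<#Dog>   <#subClassOf> <#Animal> .
<#Rex>   <#type>       <#Dog> .
"""
print(extract_triples(extract_tokens(ontology)))
# [('<#Dog>', '<#subClassOf>', '<#Animal>'), ('<#Rex>', '<#type>', '<#Dog>')]
```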

A Study of Ontology-based Context Modeling in the Area of u-Convention (온톨로지 기반 상황인지 모델링 연구: u-Convention을 중심으로)

  • Kim, Sung-Hyuk
    • Journal of the Korean Society for Information Management / v.28 no.3 / pp.123-139 / 2011
  • Context-awareness, a key technology of ubiquitous computing, needs a context model that understands and processes situational information coming from diverse sensors and devices and that can be applied across various domains. Semantic-web ontologies use a structured standard format and express the meaning of information, making it possible to recognize context-aware situations effectively and allowing the system to share information and understand situations through inference. In this paper, we propose a layered ontology model to support the generality and scalability of context-aware systems, and we apply the model to the u-Convention domain. In addition, we propose an effective reasoning method for handling compound situations by combining OWL-DL with SWRL rules.
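
The combination of ontology facts with rule-based reasoning can be sketched as below; the facts and the presentingAt rule are hypothetical stand-ins for OWL-DL assertions and a SWRL rule in a u-Convention-like setting, not the paper's actual model.

```python
# Context facts play the role of ontology assertions; a hand-written Python
# rule stands in for a SWRL rule that infers a compound situation.
facts = {
    ("kim", "hasRole", "speaker"),
    ("kim", "locatedIn", "hall_A"),
    ("session_3", "heldIn", "hall_A"),
    ("session_3", "status", "in_progress"),
}

def infer_presenting(facts):
    """Rule: hasRole(p, speaker) & locatedIn(p, r) & heldIn(s, r)
             & status(s, in_progress)  ->  presentingAt(p, s)."""
    inferred = set()
    for (p, rel1, role) in facts:
        if rel1 != "hasRole" or role != "speaker":
            continue
        for (p2, rel2, room) in facts:
            if rel2 != "locatedIn" or p2 != p:
                continue
            for (s, rel3, room2) in facts:
                if rel3 == "heldIn" and room2 == room and \
                        (s, "status", "in_progress") in facts:
                    inferred.add((p, "presentingAt", s))
    return inferred

print(infer_presenting(facts))      # {('kim', 'presentingAt', 'session_3')}
```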

A Study on the Development of Ontology based on the Jewelry Brand Information (귀금속.보석 상품정보 온톨로지 구축에 관한 연구)

  • Lee, Ki-Young
    • Journal of the Korea Society of Computer and Information / v.13 no.7 / pp.247-256 / 2008
  • This research develops a product retrieval system that applies intelligent agent technology on top of an automatically created domain ontology, addressing the problems of e-commerce systems that search web documents with simple keywords. For ontology development, representative terms are extracted from the classification information of the international product classification code (UNSPSC) and from jewelry websites, and an analogy-relationship thesaurus is applied to establish a standardized ontology. Intelligent agent technology is then applied at the retrieval stage of a semantic-web-based e-commerce system to help users collect information efficiently. In addition, user profiles are designed for a personalized search environment, and a personalized retrieval agent with an inference function provides fast information collection and accurate search.
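
A small sketch of deriving a class hierarchy from UNSPSC-style codes is given below; UNSPSC codes encode segment, family, class and commodity in successive two-digit blocks, and the sample codes and labels are illustrative placeholders rather than real catalog data.

```python
# Derive each category's parent by zeroing out its last non-zero two-digit
# block, yielding the segment/family/class/commodity hierarchy.

def parent_code(code):
    """Return the parent UNSPSC-style code, or None for a top-level segment."""
    blocks = [code[i:i + 2] for i in range(0, 8, 2)]
    for i in range(3, 0, -1):           # never clear the segment block
        if blocks[i] != "00":
            blocks[i] = "00"
            return "".join(blocks)
    return None

catalog = {                              # hypothetical codes and labels
    "54000000": "Jewelry and gemstone products (segment)",
    "54100000": "Jewelry (family)",
    "54101500": "Fine jewelry (class)",
}
for code, label in catalog.items():
    print(f"{label}  -->  parent {parent_code(code)}")
```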

Modern Methods of Text Analysis as an Effective Way to Combat Plagiarism

  • Myronenko, Serhii;Myronenko, Yelyzaveta
    • International Journal of Computer Science & Network Security / v.22 no.8 / pp.242-248 / 2022
  • The article presents an analysis of modern methods for automatically comparing original and unoriginal text to detect textual plagiarism. The study covers two types of plagiarism: literal plagiarism, where plagiarists copy text exactly without changing anything, and intelligent plagiarism, which uses more sophisticated techniques such as word and sign replacement and is harder to detect because of this text manipulation. Standard techniques for extrinsic detection are string-based, vector-space and semantic-based. The most common and most successful models for detecting literal plagiarism, N-gram and vector-space models, are analyzed and their advantages and disadvantages evaluated. The most effective models for detecting intelligent plagiarism, particularly for identifying paraphrases by measuring the semantic similarity of short text components, are also investigated, including neural-network models based on natural language sentence matching such as the Densely Interactive Inference Network (DIIN), Bilateral Multi-Perspective Matching (BiMPM), and Bidirectional Encoder Representations from Transformers (BERT) and its family of models. The article summarizes progress in improving plagiarism detection systems, techniques and related models, and outlines the relevant, unresolved problems in detecting intelligent plagiarism: effective recognition of unoriginal ideas and of qualitatively paraphrased text.
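
The two literal-plagiarism baselines mentioned above, word n-gram overlap and a bag-of-words cosine in vector space, can be sketched as follows; the sentences and any thresholds are illustrative only.

```python
# Word n-gram overlap and vector-space cosine similarity between an original
# passage and a suspected copy.
from collections import Counter
from math import sqrt

def ngrams(text, n=3):
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def ngram_overlap(a, b, n=3):
    """Fraction of the original's n-grams that also appear in the suspect."""
    na, nb = ngrams(a, n), ngrams(b, n)
    return sum((na & nb).values()) / max(1, sum(na.values()))

def cosine(a, b):
    """Bag-of-words cosine similarity in vector space."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

original  = "the quick brown fox jumps over the lazy dog"
suspected = "the quick brown fox leaps over the lazy dog"
print(ngram_overlap(original, suspected))   # high overlap -> likely copied
print(cosine(original, suspected))
```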