• Title/Summary/Keyword: Knowledge extraction


Improving the CONTES method for normalizing biomedical text entities with concepts from an ontology with (almost) no training data at BLAH5

  • Ferre, Arnaud;Ba, Mouhamadou;Bossy, Robert
    • Genomics & Informatics
    • /
    • v.17 no.2
    • /
    • pp.20.1-20.5
    • /
    • 2019
  • Entity normalization, or entity linking in the general domain, is an information extraction task that aims to annotate/bind multiple words/expressions in raw text with semantic references, such as concepts of an ontology. An ontology consists minimally of a formally organized vocabulary or hierarchy of terms, which captures knowledge of a domain. Presently, machine-learning methods, often coupled with distributional representations, achieve good performance. However, these require large training datasets, which are not always available, especially for tasks in specialized domains. CONTES (CONcept-TErm System) is a supervised method that addresses entity normalization with ontology concepts using small training datasets. CONTES has some limitations: it does not scale well to very large ontologies, it tends to overgeneralize predictions, and it lacks valid representations for out-of-vocabulary words. Here, we propose to assess different methods for reducing the dimensionality of the ontology representation. We also propose to calibrate parameters to make the predictions more accurate, and to address the problem of out-of-vocabulary words with a specific method.
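The mapping from term embeddings to ontology concepts described above can be pictured with a small sketch. The following is an illustrative toy example, not the authors' CONTES code: random data stand in for mention embeddings and the concept-ancestor matrix, truncated SVD stands in for one possible way of reducing the dimensionality of the ontology representation, and a ridge regression learns the linear projection.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy stand-ins: 200 training mentions with 100-d word embeddings, and an
# ontology of 500 concepts, each encoded as a binary vector over its ancestors.
X_mentions = rng.normal(size=(200, 100))                      # mention embeddings
C_ancestors = (rng.random((500, 500)) < 0.01).astype(float)   # concept x ancestor
y_concept = rng.integers(0, 500, size=200)                    # gold concept per mention

# 1) Reduce the dimensionality of the ontology-side representation.
svd = TruncatedSVD(n_components=50, random_state=0)
C_reduced = svd.fit_transform(C_ancestors)                    # (500, 50)

# 2) Learn a linear projection from mention space to the reduced concept space.
projector = Ridge(alpha=1.0).fit(X_mentions, C_reduced[y_concept])

# 3) Normalize a new mention: project it and pick the nearest concept by cosine.
def normalize(mention_vec):
    p = projector.predict(mention_vec[None, :]).ravel()
    sims = C_reduced @ p / (np.linalg.norm(C_reduced, axis=1) * np.linalg.norm(p) + 1e-12)
    return int(np.argmax(sims))

print("predicted concept id:", normalize(rng.normal(size=100)))
```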

An intelligent health monitoring method for processing data collected from the sensor network of structure

  • Ghiasi, Ramin;Ghasemi, Mohammad Reza
    • Steel and Composite Structures
    • /
    • v.29 no.6
    • /
    • pp.703-716
    • /
    • 2018
  • Rapid detection of damage in civil engineering structures, in order to assess possible disorders and support competent decision making, is crucial to ensuring their health and ultimately enhancing the level of public safety. In traditional intelligent health monitoring methods, features are manually extracted depending on prior knowledge and diagnostic expertise. Inspired by the idea of unsupervised feature learning, which uses artificial intelligence techniques to learn features from raw data, a two-stage learning method is proposed here for intelligent health monitoring of civil engineering structures. In the first stage, the Nyström method is used for automatic feature extraction from structural vibration signals. In the second stage, Moving Kernel Principal Component Analysis (MKPCA) is employed to classify the health conditions based on the extracted features. In this paper, KPCA has been implemented in a new form as Moving KPCA for effectively segmenting large data and for determining changes as data are continuously collected. Numerical results revealed that the proposed health monitoring system has a satisfactory performance for detecting the damage scenarios of a three-story aluminum frame structure. Furthermore, the enhanced version of KPCA methods exhibited a significant improvement in sensitivity, accuracy, and effectiveness over conventional methods.
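As a rough illustration of the two-stage idea, the sketch below uses scikit-learn's Nystroem transformer for automatic feature extraction from synthetic vibration windows and a plain PCA reconstruction error as a stand-in damage index; the paper's Moving KPCA variant is not reproduced here, and all data and parameters are hypothetical.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
signals = rng.normal(size=(300, 1024))   # 300 synthetic vibration windows

# Stage 1: Nystroem approximation of an RBF kernel feature map (100 landmarks)
# as the automatic feature-extraction step.
nystroem = Nystroem(kernel="rbf", gamma=1e-3, n_components=100, random_state=1)
features = nystroem.fit_transform(signals)               # (300, 100)

# Stage 2: PCA on the kernel features; the reconstruction error of new windows
# against the baseline subspace serves as a simple damage index.
pca = PCA(n_components=10).fit(features)
recon = pca.inverse_transform(pca.transform(features))
damage_index = np.linalg.norm(features - recon, axis=1)
print("first five damage indices:", damage_index[:5].round(3))
```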

Interventions on Well-being, Occupational Health, and Aging of Healthcare Workers: A Scoping Review of Systematic Reviews

  • Marc Fadel;Yves Roquelaure;Alexis Descatha
    • Safety and Health at Work
    • /
    • v.14 no.1
    • /
    • pp.135-140
    • /
    • 2023
  • Introduction: With recent higher awareness of the necessity of improving healthcare workers' well-being, we aimed to overview systematic reviews dealing with interventions on well-being, occupational health, and aging of healthcare workers. Methods: From three databases (PubMed, Embase, and Web of Science), a scoping review of systematic reviews was carried out to determine current knowledge on interventions focused on the well-being or aging of healthcare workers. Only systematic reviews were considered, with appropriate extraction and quality evaluation. Results: Of the total of 445 references identified, 10 systematic reviews were included, mostly published since 2019. Nurses were the most frequent targets of interventions, and mental health was the main outcome described. The overall level of quality was heterogeneous, ranging from high- to low-quality reviews. Conclusions: Workers' mental health and well-being were the major outcomes targeted by interventions, with varying levels of evidence. Further studies are needed with integrative approaches on global health and life course perspectives, with a focus on the plurality of settings, worker types, and women.

Research on Equal-resolution Image Hiding Encryption Based on Image Steganography and Computational Ghost Imaging

  • Leihong Zhang;Yiqiang Zhang;Runchu Xu;Yangjun Li;Dawei Zhang
    • Current Optics and Photonics
    • /
    • v.8 no.3
    • /
    • pp.270-281
    • /
    • 2024
  • Information-hiding technology is introduced into an optical ghost imaging encryption scheme, which can greatly improve the security of the encryption scheme. However, in current mainstream research on camouflage ghost imaging encryption, information-hiding techniques such as digital watermarking can only hide information at 1/4 the resolution of the cover image, and most secret images are simple binary images. In this paper, we propose an equal-resolution image-hiding encryption scheme based on deep learning and computational ghost imaging. With the equal-resolution image steganography network based on deep learning (ERIS-Net), we can realize the hiding and extraction of equal-resolution natural images and increase the amount of encrypted information from 25% to 100% when transmitting the same size of secret data. To the best of our knowledge, this paper combines image steganography based on deep learning with an optical ghost imaging encryption method for the first time. Through deep learning experiments and simulation, the feasibility, security, robustness, and high encryption capacity of this scheme are verified, and a new idea for optical ghost imaging encryption is proposed.
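For readers unfamiliar with computational ghost imaging, the toy sketch below shows only the basic correlation reconstruction from known speckle patterns and single-pixel (bucket) measurements; the deep-learning steganography stage (ERIS-Net) and the specifics of the paper's scheme are omitted, and the object and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32                                        # hypothetical 32 x 32 object
obj = np.zeros((N, N))
obj[8:24, 8:24] = 1.0                         # toy image: a bright square

M = 4000                                      # number of known speckle patterns
patterns = rng.random((M, N, N))
bucket = (patterns * obj).sum(axis=(1, 2))    # single-pixel (bucket) detector values

# Correlation reconstruction: G(x, y) = <(B - <B>) * I(x, y)>
recon = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)
recon = (recon - recon.min()) / (np.ptp(recon) + 1e-12)
print("reconstructed centre block:\n", recon[14:18, 14:18].round(2))
```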

Personalized Media Control Method using Probabilistic Fuzzy Rule-based Learning (확률적 퍼지 룰 기반 학습에 의한 개인화된 미디어 제어 방법)

  • Lee, Hyong-Euk;Kim, Yong-Hwi;Lee, Tae-Youb;Park, Kwang-Hyun;Kim, Yong-Soo;Cho, Joon-Myun;Bien, Z. Zenn
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.2
    • /
    • pp.244-251
    • /
    • 2007
  • Intention reading techniques are essential to provide personalized services toward more convenient and human-friendly services in complex ubiquitous environments such as a smart home. If a system has knowledge about a user's intention in his/her behavioral pattern, the system can automatically provide more qualified and satisfactory services in advance of the user's explicit command. In this sense, learning capability is considered a key function for intention reading in view of knowledge discovery. In this paper, we introduce a personalized media control method as a possible application in a smart home. Note that data patterns such as human behavior contain a lot of inconsistent data due to the limitations of feature extraction and insufficiently available features, where separable data groups are intermingled with inseparable data groups. To deal with such data patterns, we introduce an effective engineering approach combining fuzzy logic and probabilistic reasoning. The proposed learning system, which is based on the IFCS (Iterative Fuzzy Clustering with Supervision) algorithm, extracts probabilistic fuzzy rules effectively from the given numerical training data. Furthermore, an extended architectural design methodology of the learning system incorporating the IFCS algorithm is introduced. Finally, experimental results of the media contents recommendation system are given to show the effectiveness of the proposed system.
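As a rough illustration of how fuzzy rules can be extracted from numerical training data, the sketch below implements plain fuzzy c-means clustering; it is only a stand-in for the paper's IFCS algorithm (which adds supervision and probabilistic outputs), and the features and data are hypothetical.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=3):
    """Plain fuzzy c-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = dist ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Hypothetical features per media-viewing event: (hour of day / 24, ambient light).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal([0.3, 0.2], 0.05, size=(50, 2)),
               rng.normal([0.8, 0.7], 0.05, size=(50, 2))])

centers, U = fuzzy_c_means(X, c=2)
# Each center plus its membership function can be read as the antecedent of one
# fuzzy rule, e.g. "IF context is near (0.3, 0.2) THEN recommend channel A".
print("rule antecedent centers:", centers.round(2))
```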

Semantic Representation and Translation of Electronic Product Code(EPC) data in EPC Network (EPC 네트워크의 전자물품코드(EPC) 데이터 의미표현과 해석)

  • Park, Dae-Won;Kwon, Hyuk-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.1
    • /
    • pp.70-81
    • /
    • 2009
  • An ontology is an explicit specification of concepts and of the relationships between concepts in a domain of interest. Considered one of the typical knowledge representation methods, ontology is applied in various studies such as information extraction, information integration, information sharing, and knowledge management. In IT-based industries, ontology is applied to research on information integration and sharing in order to enhance interoperability between enterprises. In supply chains and logistics, several enterprises participate as business partners to plan the movement of goods and to control goods and logistics flows. A number of studies on information integration and sharing for the effective and efficient management of logistics or supply chains have been conducted. In this paper, we present an ontology as a knowledge base for the semantic integration of logistics information distributed along the logistics flow. In particular, we focus on developing an ontology that enables the representation and translation of the semantic meaning of EPC data in EPC Network-based logistics. We present a scenario for tracing products in logistics in order to show the value of our ontology.
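To illustrate how EPC data can be given machine-interpretable semantics, the sketch below encodes one hypothetical observation event as RDF triples with rdflib and reads it back with a SPARQL query; the namespace, class, and property names are invented for illustration and are not the authors' ontology.

```python
from rdflib import Graph, Namespace, Literal, RDF, XSD

EPC = Namespace("http://example.org/epc-ontology#")   # hypothetical namespace
g = Graph()
g.bind("epc", EPC)

# One hypothetical observation event for a tagged product in the logistics flow.
event = EPC["event-001"]
g.add((event, RDF.type, EPC.ObservationEvent))
g.add((event, EPC.epcValue, Literal("urn:epc:id:sgtin:0614141.107346.2017")))
g.add((event, EPC.readPoint, EPC["warehouse-A-dock-3"]))
g.add((event, EPC.bizStep, EPC.shipping))
g.add((event, EPC.eventTime, Literal("2009-01-15T09:30:00", datatype=XSD.dateTime)))

# A SPARQL query "translates" the raw EPC data into a human-readable trace fact.
q = "SELECT ?epc ?where WHERE { ?e epc:epcValue ?epc ; epc:readPoint ?where . }"
for row in g.query(q, initNs={"epc": EPC}):
    print(f"{row.epc} was observed at {row.where}")
```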

An Automatic Business Service Identification for Effective Relevant Information Retrieval of Defense Digital Archive (국방 디지털 아카이브의 효율적 연관정보 검색을 위한 자동화된 비즈니스 서비스 식별)

  • Byun, Young-Tae;Hwang, Sang-Kyu;Jung, Chan-Ki
    • Journal of the Korean Society for information Management
    • /
    • v.27 no.4
    • /
    • pp.33-47
    • /
    • 2010
  • The growth of IT technology and the popularity of network-based information sharing have increased the number of digital contents in the military domain. Thus, issues arise in finding suitable public information among the growing volume of long-term preserved digital public information. Because the sources of raw data and the times of compilation vary, many correlations can exist among digital contents. A business service ontology makes knowledge explicit and allows knowledge sharing between information providers and information consumers in a public digital archive, thereby improving the searchability of digital public information. The business service ontology acts at the interface as a bridge between information providers and information consumers. However, owing to the difficulty of semantic knowledge extraction for business process analysis, it is hard to automate the construction of a business service ontology that maps unstructured activities to units of business service. To solve this problem, we propose a new business service auto-acquisition method as the first step in constructing a business service ontology based on Enterprise Architecture.

KMSCR: A system for managing knowledge assets of an IT consulting firm (IT 컨설팅 회사의 지적 자산 관리를 위한 지식관리시스템)

  • 김수연;황현석;서의호
    • Proceedings of the Korea Intelligent Information Systems Society Conference
    • /
    • 2001.06a
    • /
    • pp.233-239
    • /
    • 2001
  • Recently, most companies have recognized the importance of managing intellectual assets in order to share and reuse the knowledge and know-how needed to perform their work. In an IT consulting firm, a highly knowledge-intensive business, the management of intellectual assets is more important than in any other kind of company. For a consulting firm, the sharing of validated intellectual assets and their intelligent, rapid retrieval are key factors directly linked to the quality of consulting services and customer satisfaction. Accordingly, most consulting firms devote considerable effort to managing their knowledge assets. The purpose of this paper is to improve the productivity and quality of consulting services by centrally managing the various types of intellectual assets handled in an IT consulting firm so that consultants carrying out projects at dispersed customer sites can share them. To this end, we survey and model the inventory of all intellectual assets managed in a consulting firm and propose a system architecture that allows them to be easily stored and retrieved. The proposed architecture was implemented as a system on an NT platform using Index Server (KMSCR: A Knowledge Management System for managing Consulting Resources). When a consultant enters a search term, KMSCR returns results in various formats (such as .doc, .ppt, .xls, .rtf, .txt, and .html) in order of relevance, helping consultants reuse consulting resources effectively. In addition, searches cover not only pre-registered keywords but also the full text of documents, enabling more effective and efficient retrieval of consulting resources.
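As a rough modern analogue of KMSCR's relevance-ordered full-text search (the original system used Index Server on an NT platform), the sketch below ranks a few hypothetical consulting documents against a query with TF-IDF and cosine similarity; file names and texts are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical consulting deliverables (file name -> extracted full text).
docs = {
    "proposal_A.doc": "ERP implementation proposal for a manufacturing client",
    "lessons_B.ppt":  "lessons learned from a data warehouse consulting project",
    "estimate_C.xls": "effort estimate template for an ERP rollout",
}

vect = TfidfVectorizer()
doc_matrix = vect.fit_transform(docs.values())

query_vec = vect.transform(["ERP project proposal"])
scores = cosine_similarity(query_vec, doc_matrix).ravel()

# Print the documents in order of relevance, as KMSCR does for its search results.
for name, score in sorted(zip(docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {name}")
```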


Optimization and Performance Analysis of Distributed Parallel Processing Platform for Terminology Recognition System (전문용어 인식 시스템을 위한 분산 병렬 처리 플랫폼 최적화 및 성능평가)

  • Choi, Yun-Soo;Lee, Won-Goo;Lee, Min-Ho;Choi, Dong-Hoon;Yoon, Hwa-Mook;Song, Sa-kwang;Jung, Han-Min
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.10
    • /
    • pp.1-10
    • /
    • 2012
  • Many statistical methods have been adopted for terminology recognition to improve its accuracy. However, since previous studies were carried out on a single core or a single machine, they have difficulty analyzing the explosively increasing volume of documents in real time. In this study, the tasks where bottlenecks occur in the process of terminology recognition are classified into linguistic processing in the 'candidate terminology extraction' step and the collection of statistical information in the 'terminology weight assignment' step. A terminology recognition system that addresses each task by means of distributed parallel processing based on MapReduce is implemented and evaluated. The experiments were performed in two ways: the first experiment revealed that distributed parallel processing on 12 nodes improves processing speed by 11.27 times compared to a single machine, and the second experiment compared 1) the default environment, 2) multiple reducers, 3) a combiner, and 4) the combination of 2) and 3), with 3) showing the best performance. Our terminology recognition system contributes to speeding up knowledge extraction from large-scale science and technology documents.
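The combiner setting that performed best can be illustrated with a small, plain-Python stand-in for Hadoop MapReduce: a mapper emits candidate-term counts, a per-split combiner pre-aggregates them before the shuffle, and a reducer merges the partial counts. The data and the candidate-term heuristic below are hypothetical, not the paper's pipeline.

```python
from collections import Counter
from itertools import chain

def mapper(doc):
    """Emit (candidate term, 1) pairs; the length filter is only a toy heuristic."""
    return [(tok.lower(), 1) for tok in doc.split() if len(tok) > 3]

def combiner(pairs):
    """Pre-aggregate within a split before the shuffle, cutting network traffic."""
    counts = Counter()
    for term, n in pairs:
        counts[term] += n
    return counts

def reducer(partials):
    """Merge the per-split partial counts into global candidate-term frequencies."""
    total = Counter()
    for c in partials:
        total.update(c)
    return total

splits = [["terminology recognition needs statistical information",
           "distributed parallel processing with MapReduce"],
          ["terminology weight assignment also needs statistical information"]]

partials = [combiner(chain.from_iterable(mapper(d) for d in docs)) for docs in splits]
print(reducer(partials).most_common(3))
```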

The thickness of facial and palatal bone of maxillary anterior natural teeth: radiographic analysis using computed tomography (전산화 단층 촬영을 이용한 상악 전치부 자연치의 순측과 구개측 골의 두께 계측)

  • Bae, Soo-Yong;Park, Jung-Chul;Sohn, Joo-Yeon;Um, Yoo-Jung;Jung, Ui-Won;Kim, Chang-Sung;Cho, Kyoo-Sung;Chai, Jung-Kiu;Kim, Chong-Kwan;Choi, Seong-Ho
    • The Journal of the Korean dental association
    • /
    • v.47 no.10
    • /
    • pp.669-676
    • /
    • 2009
  • Purpose: The anterior region is a crucial area for esthetic implant restoration. However, the alveolar process undergoes atrophy after the removal of teeth, creating an unfavorable situation for implant installation. Knowledge of the thickness of the alveolar bone is required to estimate and anticipate bone resorption after extraction. The aim of this study was to measure the facial, palatal, and faciopalatal bone thickness of the maxillary anterior teeth. Methods: Facial, palatal, and faciopalatal bone thickness were measured on computed tomography (CT) images from 57 patients, using an image analyzer program (OnDemand3D®, Cybermed, Seoul, Korea). Results: The thickness of the facial bone at the central incisors, lateral incisors, and canines was less than 1 mm. The thickness of the facial bone increased from the anterior to the posterior region, and the thickness of the palatal bone increased from the posterior to the anterior region. Conclusion: These measurements can be used in planning implant surgery before extraction. CT is clinically useful in the evaluation of alveolar bone thickness.
