• Title/Summary/Keyword: 온톨로지 시스템 (ontology system)

A study for 'Education 2.0' service case and Network Architecture Analysis using convergence technology (융합 기술을 활용한 '교육 2.0' 서비스 사례조사와 네트워크 아키텍처 분석에 관한 연구)

  • Kang, Jang-Mook; Kang, Sung-Wook; Moon, Song-Chul
    • Journal of Digital Contents Society / v.9 no.4 / pp.759-769 / 2008
  • Convergence technologies such as Open API, mash-up, and syndication, which stimulate the participation, sharing, and openness characteristic of Web 2.0, bring diversity to the education field. Convergence in education means a shift toward 'Education 2.0', and the new form of education that reflects the Web 2.0 stream is called 'Education 2.0'. The education environment can become a social-network space that closely links learners, educators, and educational organizations. Network technology built on ontology languages makes semantic education possible, in which personalized education services and their connections are understood. The reputation-based filtering system of Amazon and the collective intelligence of Wikipedia are representative examples. The education field can adopt these actively because learners, as the main agents of education, can broaden their participation and communicate bilaterally on an equal footing. This paper introduces and investigates a new network architecture for linking content, so that Web 2.0 technology and educational content can be converged. Education 2.0 services that utilize convergence technology, together with the network architecture for realizing Education 2.0, are introduced and analyzed so that this research can serve as a preceding study for building an Education 2.0 platform.

Improvement of Personalized Diagnosis Method for U-Health (U-health 개인 맞춤형 질병예측 기법의 개선)

  • Min, Byoung-Won; Oh, Yong-Sun
    • The Journal of the Korea Contents Association / v.10 no.10 / pp.54-67 / 2010
  • Applying the conventional machine-learning methods that have been widely used in the health-care area to modern U-health service analysis raises several fundamental problems. First, because the study of U-health has a short history, there are still few application examples of the traditional methods in the modern U-health environment. Second, it is difficult to apply machine-learning methods to a U-health service environment that requires real-time management of disease, because such methods spend a long time in the learning process. Third, a personalized U-health diagnosis system cannot be implemented with the conventional methods because, although various machine-learning schemes have been proposed, none offers a way to assign weights to disease-related variables. In this paper, a novel diagnosis scheme called PCADP is proposed to overcome these problems. PCADP is a personalized diagnosis method that makes bio-data analysis just a 'process' within the U-health service system. In addition, we offer a semantic model of the U-health ontology framework in order to describe U-health data and service specifications as meaningful representations based on PCADP. PCADP is a statistical diagnosis method characterized by a flexible structure, real-time processing, continuous improvement, and easy monitoring of the decision process. To the best of the authors' knowledge, the PCADP scheme and the ontology framework proposed in this paper exhibit these characteristics to one of the highest degrees among recently developed U-health schemes.
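
The abstract gives no formulas for PCADP, so the following Python sketch is only a hypothetical illustration of the core idea it describes: a statistical score that weights disease-related variables per patient and is cheap enough to evaluate in real time. All variable names, baselines, and weights are invented.

```python
# Hypothetical sketch of a personalized, weighted risk score in the spirit
# of PCADP as described in the abstract; names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Variable:
    name: str        # disease-related bio-variable, e.g. blood pressure
    value: float     # current sensor reading
    baseline: float  # patient's personal baseline
    weight: float    # personalized importance of this variable

def risk_score(variables: list[Variable]) -> float:
    """Weighted sum of normalized deviations from the patient's baselines.

    O(n) per reading, so it can run on every incoming sensor sample,
    matching the real-time requirement the paper stresses.
    """
    total_weight = sum(v.weight for v in variables) or 1.0
    score = sum(v.weight * abs(v.value - v.baseline) / max(abs(v.baseline), 1e-9)
                for v in variables)
    return score / total_weight

# Example: two variables with patient-specific weights.
reading = [Variable("systolic_bp", 150.0, 120.0, 0.7),
           Variable("glucose", 110.0, 100.0, 0.3)]
print(f"risk = {risk_score(reading):.3f}")  # higher = larger deviation
```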

Semantic User Profiles Manager based on OSGi (OSGi기반 시맨틱 사용자 프로파일 관리자)

  • Song, Chang-Woo; Kim, Jong-Hun; Chung, Kyung-Yong; Rim, Kee-Wook; Lee, Jung-Hyun
    • The Journal of the Korea Contents Association / v.8 no.8 / pp.9-18 / 2008
  • Research is being conducted to give users convenient access to services such as personalized data and content services. The use of information and the fusion of services across various devices and terminals raise the question of which personalization mechanism should be used to provide high-quality content at the time and place users desire. Existing mechanisms are hard for other service providers to handle, because each provider stores preferences and personal information differently, and they are inconvenient because users must set up and manage the information themselves. This paper therefore proposes a semantic user-profile manager based on OSGi, a middleware for providing and extending semantic services, in order to manage user profiles dynamically regardless of the service provider. It also defines a personalized semantic profile that enables user profiling, ontological domain modeling, and semantic reasoning. To validate the proposal, we implemented semantic profiles as a bundle running on OSGi. When users enter the service area and use various devices, the semantic service is matched against their semantic user profiles. The proposed system can easily extend the matching of services to user profiles, as well as matching between user profiles or between services.
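
The paper's bundle runs on OSGi, which is Java middleware; purely to illustrate the matching step the abstract describes (pairing a semantic user profile against the services available in a service area), here is a hypothetical, language-neutral sketch in Python. All attribute and service names are invented.

```python
# Hypothetical sketch of profile-to-service matching; the paper's actual
# component is an OSGi (Java) bundle. All names here are invented.

user_profile = {
    "language": "ko",
    "device": "pda",
    "interests": {"news", "music"},
}

services = [
    {"name": "news_feed", "requires": {"language": "ko"}, "topic": "news"},
    {"name": "video_wall", "requires": {"device": "tv"}, "topic": "movies"},
    {"name": "radio", "requires": {}, "topic": "music"},
]

def matches(profile: dict, service: dict) -> bool:
    """A service matches if every required attribute is satisfied by the
    profile and its topic is among the user's interests."""
    requirements_ok = all(profile.get(k) == v
                          for k, v in service["requires"].items())
    return requirements_ok and service["topic"] in profile["interests"]

matched = [s["name"] for s in services if matches(user_profile, s)]
print(matched)  # ['news_feed', 'radio']
```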

A Design and Implementation of Heterogeneous Metadata Searching System using Ontology (Ontology를 이용한 이종 메타데이터 검색 시스템의 설계 및 구현)

  • Choe, Hyun-Jong; Kim, Tae-Young
    • Journal of The Korean Association of Information Education / v.8 no.3 / pp.353-360 / 2004
  • The World Wide Web is no longer a meaningless sea of information; it is becoming the Semantic Web, which provides users with meaningful information. The starting point is XML and metadata; RDF is a stopover that provides the technique for relating arbitrary web resources; and the semantics and logic of web resources can now be settled in ontologies. Many educational multimedia web resources in Korea describe their metadata with KERIS's KEM (Korea Educational Metadata), so Korea needs to begin studying the semantics and logic of web resources. However, many researchers in Korea are more eager to study the Dublin Core (DC) and SCORM LOM metadata specifications than KEM. Methods for sharing and integrating these three metadata specifications should therefore be studied before the semantics and logic of web resources can be. We designed an ontology to integrate the three specifications and implemented a prototype system using it. The three specifications contain some elements whose labels and meanings coincide, and others whose labels differ while their meanings coincide. To match the different labels that share a meaning, we adopted a one-to-one mapping technique in designing our ontology. The designed ontology was imported as an "integrated schema" into our prototype search system to integrate the three metadata formats in the databases. We also found that more specific design of class properties in the ontology is needed to provide users with richer search results such as synonyms, antonyms, hierarchies, and associations.
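
As a hedged illustration of the one-to-one mapping idea described above, the following sketch maps same-meaning, differently-labeled elements of KEM, DC, and LOM onto a single integrated-schema field. The element labels shown are invented stand-ins, not the paper's actual mapping table.

```python
# Hypothetical sketch of one-to-one element mapping across three metadata
# schemas into an "integrated schema"; the labels below are illustrative.

INTEGRATED_SCHEMA = {
    # integrated field : {source schema: source element label}
    "title":   {"KEM": "title",   "DC": "dc:title",   "LOM": "general.title"},
    "creator": {"KEM": "author",  "DC": "dc:creator", "LOM": "lifecycle.contribute"},
    "subject": {"KEM": "keyword", "DC": "dc:subject", "LOM": "general.keyword"},
}

def to_integrated(record: dict, schema: str) -> dict:
    """Rewrite a record from one source schema into the integrated schema."""
    out = {}
    for field, labels in INTEGRATED_SCHEMA.items():
        label = labels[schema]
        if label in record:
            out[field] = record[label]
    return out

dc_record = {"dc:title": "Fraction basics", "dc:creator": "Kim"}
print(to_integrated(dc_record, "DC"))
# {'title': 'Fraction basics', 'creator': 'Kim'}
```

A search over the integrated fields then reaches records from all three schemas at once, which is the role the "integrated schema" plays in the prototype.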

Representation and Reasoning of User Context Using Fuzzy OWL (Fuzzy OWL을 이용한 사용자 Context의 표현 및 추론)

  • Sohn, Jong-Soo; Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.14 no.1 / pp.35-45 / 2008
  • In order to construct a ubiquitous computing environment, it is necessary to develop technology that can recognize users and their circumstances. In this regard, recognizing and expressing user context regardless of computer and language types has emerged as an important task in heterogeneous distributed systems. As a means of representing user context in the ubiquitous environment, this paper proposes describing it in the form closest to human thinking by using the Semantic Web and fuzzy concepts, independent of language and computer types. Because the conventional method of representing context with ordinary (crisp) sets has limitations in expressing the real-world environment, this paper uses Fuzzy OWL, a fusion of fuzzy concepts and the standard Web Ontology Language OWL. The suggested method is as follows. First, we represent the environmental information the user encounters as numerical values and states, and describe it in OWL. We then transform the OWL context into Fuzzy OWL. As a last step, we show that automatic recognition of circumstances is possible in this procedure by using the fuzzy inference engine FiRE. With the suggested method we can describe context usable in the ubiquitous computing environment; the use of fuzzy concepts makes it more effective at expressing the degree and status of the context. Moreover, on the basis of the stated context we can infer the state of the environment the user encounters, and the system can act automatically in compliance with the inferred state.
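
The abstract does not show the Fuzzy OWL syntax it generates; as a minimal sketch of just the fuzzification step, assume a hypothetical 'Hot' context concept with an invented trapezoidal membership function (the OWL and FiRE plumbing is omitted):

```python
# Minimal sketch of fuzzification: turning a crisp sensor value into a
# fuzzy membership degree that could annotate an OWL class assertion.
# The 'Hot' concept and its trapezoid parameters are invented.

def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
    """Trapezoidal membership: 0 below a, rising to 1 on [b, c], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

temperature = 26.0  # crisp sensor reading, degrees Celsius
degree = trapezoid(temperature, 24, 28, 35, 40)

# A Fuzzy OWL assertion would attach this degree to the class assertion,
# conceptually: ClassAssertion(Hot, room1) >= 0.5 -- syntax simplified here.
print(f"membership of {temperature} in Hot = {degree:.2f}")  # 0.50
```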

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun; Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research is being actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on cognitive problems related to human intelligence, such as learning and problem solving, and thanks to recent interest in the technology and work on various algorithms the field has advanced more than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to let AI agents make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is combined with statistical AI such as machine learning. Today the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data; such knowledge bases support intelligent processing in many AI applications, for example the question-answering systems behind smart speakers. Building a useful knowledge base, however, is a time-consuming task that still requires a great deal of expert effort. Much recent research on knowledge-based AI uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, the user-created summaries of an article's unifying aspects. This knowledge is created through the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework, so DBpedia can expect high accuracy by generating knowledge from semi-structured, user-created infobox data. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia is limited in knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate its appropriateness, we describe a knowledge-extraction model that follows the DBpedia ontology schema, trained on Wikipedia infoboxes. The model consists of three steps: classifying documents into ontology classes, classifying the sentences suitable for triple extraction, and selecting values and transforming them into RDF triples. Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, i.e., ontology classes. After determining the document's class, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we ran comparative experiments between CRF and Bi-LSTM-CRF for the extraction step. Through the proposed process, structured knowledge can be obtained by extracting it from text documents according to the ontology schema, and the methodology can significantly reduce the experts' effort to construct instances that follow the schema.
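
No model code is given in the abstract; as a minimal sketch of the final step only, here is the decoding of a BIO-labeled sentence into an RDF triple, with invented tags, subject, and property (in the paper these labels would come from the trained CRF or Bi-LSTM-CRF):

```python
# Minimal sketch of the last step: turning a BIO-labeled sentence into an
# RDF triple. Tags and names are invented; in the paper the labels would
# be predicted by a trained CRF or Bi-LSTM-CRF model.

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea", "."]
labels = ["O", "O", "O", "O", "O", "B-country", "I-country", "O"]

def extract_span(tokens: list[str], labels: list[str], tag: str) -> str | None:
    """Collect the tokens of the first B-/I- span for the given tag."""
    span = []
    for tok, lab in zip(tokens, labels):
        if lab == f"B-{tag}" or (span and lab == f"I-{tag}"):
            span.append(tok)
        elif span:
            break
    return " ".join(span) or None

value = extract_span(tokens, labels, "country")
if value is not None:
    # Subject comes from the article being processed; the property comes
    # from the ontology attribute the sentence was classified under.
    triple = ("dbr:Seoul", "dbo:country", value)
    print(triple)  # ('dbr:Seoul', 'dbo:country', 'South Korea')
```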

Knowledge graph-based knowledge map for efficient expression and inference of associated knowledge (연관지식의 효율적인 표현 및 추론이 가능한 지식그래프 기반 지식지도)

  • Yoo, Keedong
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.49-71 / 2021
  • Users who intend to utilize knowledge to actively solve given problems proceed by crosswise and sequential exploration of knowledge items associated with each other by certain criteria, such as content relevance. A knowledge map is the diagram or taxonomy that gives an overview of the knowledge currently managed in a knowledge base, and it supports users' exploration based on the relationships between knowledge items. A knowledge map must therefore be expressed in networked form, linking related knowledge by certain types of relationships, and should be implemented with technologies or tools specialized in defining and inferring those relationships. To this end, this study suggests a methodology for developing a knowledge-graph-based knowledge map using a graph DB, which is known to be well suited to expressing and inferring the entities and relationships stored in a knowledge base. The procedures of the proposed methodology are modeling the graph data; creating nodes, properties, and relationships; and composing knowledge networks by combining the identified links between knowledge items. Among the various graph DBs, Neo4j is used in this study for its credibility and applicability, demonstrated across wide and varied application cases. To examine the validity of the methodology, a knowledge-graph-based knowledge map is implemented on the graph DB, and a performance comparison is carried out on a previous study's data to check whether this study's knowledge map yields the same level of performance. The previous study built a process-based knowledge map using ontology technology, identifying links between related knowledge items based on the sequences of the tasks that produce knowledge or are activated by it. In other words, since a task is activated by knowledge as input and also produces knowledge as output, the input and output knowledge are linked into a flow by the task; and since a business process is composed of affiliated tasks fulfilling its purpose, the knowledge network within a business process follows from the sequences of the tasks composing it. Therefore, using Neo4j, the processes, tasks, and knowledge items concerned, together with the relationships among them, are defined as nodes and relationships so that knowledge links can be identified from the task sequences. The resulting knowledge network, aggregated from the identified links, is a knowledge map with the functionality of a knowledge graph, so its performance must be tested against the previous study's validation results. The performance test examines two aspects, the correctness of knowledge links and the possibility of inferring new types of knowledge: the former is examined with 7 questions, and the latter is checked by extracting two new types of knowledge. As a result, the knowledge map constructed through the proposed methodology showed the same level of performance as the previous one, and handled knowledge definition as well as relationship inference more efficiently.
Furthermore, compared with the previous study's ontology-based approach, this study's graph-DB-based approach showed more beneficial functionality: intensively managing only the knowledge of interest, dynamically defining knowledge and relationships to reflect various meanings from situations to purposes, agilely inferring knowledge and relationships through Cypher-based queries, and easily creating new relationships by aggregating existing ones. The artifacts of this study can be applied to implement user-friendly knowledge exploration that reflects users' cognitive processes toward associated knowledge, and can further underpin the development of an intelligent knowledge base that expands autonomously by discovering new knowledge and relationships through inference. Beyond this, the study has an immediate effect on implementing the networked knowledge map essential for contemporary users seeking the right knowledge to use.
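
The paper's Cypher statements are not listed in the abstract; the following is a hypothetical sketch of the modeling it describes (Process, Task, and Knowledge nodes, with knowledge linked into a flow through the tasks that consume and produce it), using the official neo4j Python driver. Labels, relationship types, and credentials are invented.

```python
# Hypothetical sketch of the Process/Task/Knowledge modeling described in
# the abstract, via the official neo4j Python driver. Labels, relationship
# types, and credentials are invented, not the paper's actual schema.

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

SETUP = """
MERGE (p:Process {name: 'order handling'})
MERGE (k1:Knowledge {name: 'order form'})
MERGE (k2:Knowledge {name: 'approved order'})
MERGE (t:Task {name: 'approve order', seq: 1})
MERGE (t)-[:PART_OF]->(p)
MERGE (t)-[:USES]->(k1)
MERGE (t)-[:PRODUCES]->(k2)
"""

# A task is activated by knowledge as input and produces knowledge as
# output, so chaining USES/PRODUCES along task sequences yields the
# knowledge network the paper treats as the knowledge map.
FLOW = """
MATCH (a:Knowledge)<-[:USES]-(t:Task)-[:PRODUCES]->(b:Knowledge)
RETURN a.name AS source, t.name AS via, b.name AS target
"""

with driver.session() as session:
    session.run(SETUP)
    for row in session.run(FLOW):
        print(row["source"], "->", row["via"], "->", row["target"])

driver.close()
```

Because the flow query is a pattern match rather than a join, defining a new kind of link amounts to one extra MERGE, which matches the agility the abstract attributes to the graph-DB approach.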

Necessity of Standardization and Standardized Method for Substances Accounting of Environmental Liability Insurance (환경책임보험 배출 물질 정산의 표준화 필요성 및 산출방법 표준화)

  • Park, Myeongnam; Kim, Chang-wan; Shin, Dongil
    • Journal of the Korean Institute of Gas / v.22 no.5 / pp.1-17 / 2018
  • Environmental incidents and accidents, such as the Taean peninsula crude-oil spill and the Gumi hydrofluoric acid leak, have occurred frequently since the 2000s. In the wake of such environmental pollution accidents, a consensus was formed, leading to the 2014 enactment of legislation on liability for the compensation and relief of environmental pollution damage, in force since January 2016. The environmental liability insurance system thus introduced into the domestic insurance industry needs to be managed through a standardized formulation of a new insurance model for environmental risk. This study was prompted by the need for a reliable insurance model for these risk types, offered as one of the services of a knowledge base. The accounting of the six insured media related to the occurrence of environmental pollution, such as chemicals, waste, marine, and soil, is expressed with semantic interoperability through an ontology. An insurance model was designed and presented by deriving the relationships among the quantities recorded in the existing domain expertise; the relevant concepts were conceptualized from the customer's point of view; and a plan for the future development of an ontology-based decision support system is proposed to reduce the cost and resources consumed every year. Standardizing the criteria for substance accounting is expected to minimize errors and reduce the time and resources required for verification.

A Semantic-Based Mashup Development Tool Supporting Various Open API Types (다양한 Open API 타입들을 지원하는 시맨틱 기반 매쉬업 개발 툴)

  • Lee, Yong-Ju
    • Journal of Internet Computing and Services / v.13 no.3 / pp.115-126 / 2012
  • Mashups have become very popular over the last few years, and they are also used for IT convergence services. In spite of their popularity, several challenging issues arise when combining Open APIs into mashups. First, since portal sites may offer a large number of APIs, manually searching for and finding compatible APIs can be tedious and time-consuming. Second, none of the existing portal sites provides a way to leverage the semantic techniques that have been developed to help users locate and integrate APIs, as seen in traditional SOAP-based web services. Third, even after suitable APIs have been discovered, integrating them requires in-depth programming knowledge. To solve these issues, we first show that existing techniques and algorithms for finding and matching SOAP-based web services can be reused with only minor changes. Next, we show how the characteristics of APIs can be syntactically defined and semantically described, and how these syntactic and semantic descriptions aid the discovery and composition of Open APIs. Finally, we propose a goal-directed interactive approach to the dynamic composition of APIs, in which the final mashup is generated gradually by forward chaining of APIs: at each step, a new API is added to the composition.
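
The abstract does not specify the composition engine, so the following is a minimal sketch of goal-directed forward chaining over APIs described by their input and output types; the API names and types are invented, not the paper's catalog.

```python
# Minimal sketch of forward chaining over APIs described by input/output
# types. API names and types are invented for illustration only.

APIS = [
    {"name": "geocode", "in": {"address"}, "out": {"latlng"}},
    {"name": "weather", "in": {"latlng"}, "out": {"forecast"}},
    {"name": "map", "in": {"latlng"}, "out": {"map_image"}},
]

def compose(available: set[str], goal: str) -> list[str]:
    """Greedy forward chaining: repeatedly add any API whose inputs are
    already available until the goal type is produced."""
    chain = []
    while goal not in available:
        progress = False
        for api in APIS:
            if api["name"] not in chain and api["in"] <= available:
                chain.append(api["name"])
                available |= api["out"]
                progress = True
                break
        if not progress:
            raise ValueError("goal unreachable from the given inputs")
    return chain

print(compose({"address"}, "forecast"))  # ['geocode', 'weather']
```

In the interactive approach the abstract describes, the user would steer each step rather than the greedy loop choosing automatically; the data-flow check (an API fires once its input types are available) is the same.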

Technique for Concurrent Processing Graph Structure and Transaction Using Topic Maps and Cassandra (토픽맵과 카산드라를 이용한 그래프 구조와 트랜잭션 동시 처리 기법)

  • Shin, Jae-Hyun
    • KIPS Transactions on Software and Data Engineering / v.1 no.3 / pp.159-168 / 2012
  • In new IT environments such as SNS, the Cloud, and Web 3.0, relations have become an important factor, and these relations generate transactions. However, existing relational databases and graph databases do not process both the graph structures that represent relationships and the transactions they generate. In this paper, we propose a technique that can process graph structures and transactions concurrently in a scalable complex-network system. The proposed technique stores and navigates graph structures and transactions simultaneously using the Topic Maps data model. Topic Maps is one of the ontology languages used to implement the Semantic Web (Web 3.0), and it has been used as a navigator for information through the associations between information resources. The architecture of the proposed technique was designed and implemented using Cassandra, a column-oriented NoSQL database, to ensure that it can handle data at Big Data scale through distributed processing. Finally, experiments compared storage and query processing between Oracle, a typical RDBMS, and the proposed technique on the same data source and the same queries. The results show that relationships can be expressed without joins, making the technique a sufficient alternative to the RDBMS in this role.
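
The paper's Cassandra schema is not given in the abstract; the following is a hypothetical sketch of the idea (Topic Maps topics and associations laid out so that a relationship is read back with a partition lookup instead of a join), using the DataStax cassandra-driver with invented keyspace, table, and column names.

```python
# Hypothetical sketch: Topic Maps topics and associations stored in
# Cassandra so a relationship is read back with a single partition lookup,
# no join. Keyspace, table, and column names are invented.

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
CREATE KEYSPACE IF NOT EXISTS topicmap
WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("topicmap")

# One wide row per topic: every association the topic plays a role in is
# stored in its partition, so traversal is a key lookup rather than a join.
session.execute("""
CREATE TABLE IF NOT EXISTS associations (
    topic_id text,
    assoc_type text,
    other_topic text,
    role text,
    PRIMARY KEY (topic_id, assoc_type, other_topic)
)
""")

session.execute(
    "INSERT INTO associations (topic_id, assoc_type, other_topic, role) "
    "VALUES (%s, %s, %s, %s)",
    ("user:alice", "follows", "user:bob", "follower"),
)

# Navigating from a topic to its associated topics: one partition read.
for row in session.execute(
        "SELECT assoc_type, other_topic FROM associations WHERE topic_id=%s",
        ("user:alice",)):
    print(row.assoc_type, "->", row.other_topic)

cluster.shutdown()
```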