• Title/Summary/Keyword: 학습 온톨로지 (Learning Ontology)

An Intelligent Context-Awareness Middleware for Service Adaptation based on Fuzzy Inference (퍼지 추론 기반 서비스 적응을 위한 지능형 상황 인식 미들웨어)

  • Ahn, Hyo-In;Yoon, Seok-Hwan;Yoon, Yong-Ik
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.281-286
    • /
    • 2007
  • This paper proposes an intelligent context-awareness middleware (ICAM) for ubiquitous computing environments. The ICAM model is based on an ontology that efficiently manages, analyzes, and learns from various kinds of context information, enabling intelligent services that satisfy user requirements and thereby improve the user's living environment. We also describe the current implementation of ICAM for service adaptation based on fuzzy inference, which helps applications adapt to rapidly changing ubiquitous computing environments. After defining the requirements specification of ICAM, we investigate inference processes for a higher level of context awareness. Fuzzy theory is used in the inference process, and the model is constructed through the service process. The proposed fuzzy inference is applied to the Smart Jacky: by inferring fuzzy values as the temperature changes, the Smart Jacky adapts to changes in its surroundings, such as temperature, and settles at the optimal status value.
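The temperature-driven adaptation step can be sketched with a simple fuzzy inference loop. The membership functions, rule weights, and the heater-level output below are illustrative assumptions, not the paper's actual ICAM model:

```python
# A minimal sketch of temperature-based fuzzy inference in the spirit of
# the service-adaptation step described above. Membership function shapes
# and rule weights are illustrative assumptions.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def infer_heater_level(temp_c):
    """Fuzzify temperature, apply rules, defuzzify to a heater level in [0, 1]."""
    cold = triangular(temp_c, -10.0, 0.0, 15.0)
    comfortable = triangular(temp_c, 10.0, 20.0, 30.0)
    hot = triangular(temp_c, 25.0, 35.0, 45.0)
    # Rules: cold -> strong heating (1.0), comfortable -> mild (0.3), hot -> off (0.0).
    total = cold + comfortable + hot
    if total == 0.0:
        return 0.0
    # Weighted-average defuzzification over the fired rules.
    return (cold * 1.0 + comfortable * 0.3 + hot * 0.0) / total

print(round(infer_heater_level(12.0), 2))  # partially cold and comfortable -> 0.65
print(round(infer_heater_level(30.0), 2))  # hot -> heating off, 0.0
```

At 12 °C both the "cold" and "comfortable" sets fire partially, so the defuzzified output blends their rule consequents; this blending across rule boundaries is what lets a fuzzy controller adapt smoothly to gradual environmental change.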

Recommendation Method for 3D Visualization Technology-based Automobile Parts (3D 가시화기술 기반 자동차 부품 추천 방법)

  • Kim, Gui-Jung;Han, Jung-Soo
    • Journal of Digital Convergence
    • /
    • v.11 no.7
    • /
    • pp.185-192
    • /
    • 2013
  • The purpose of this study is to establish relationships between the parts that form an automobile engine, based on a 3D visualization technology that operators in the field can learn according to their skill level, and to recommend auto parts using a task ontology. A visualization method is proposed that structures complex knowledge by representing it as a network of links and nodes, using a self-organizing map (SOM) that can be displayed in three dimensions. In addition, an is-a relationship-based hierarchical taxonomy establishes the relationships among the engine parts, making recommendation with weighted values possible. Placing this complex knowledge in 3D space gives the user more realistic and intuitive navigation: when an automobile part is selected at random, parts closely related to it are recommended, so that assembly becomes easier and the relative importance of each part can be understood without special expertise.
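The weighted recommendation over part relationships can be sketched as a lookup in a small relationship graph. The part names and weights below are illustrative assumptions, not data from the paper:

```python
# A minimal sketch of weighted part recommendation over a parts
# relationship graph, in the spirit of the taxonomy-based approach above.
# Part names and relationship weights are illustrative assumptions.

ENGINE_PARTS = {
    # part: [(related_part, relationship_weight), ...]
    "piston": [("piston_ring", 0.9), ("connecting_rod", 0.8), ("cylinder", 0.7)],
    "cylinder": [("piston", 0.7), ("cylinder_head", 0.9), ("gasket", 0.5)],
    "crankshaft": [("connecting_rod", 0.9), ("flywheel", 0.6)],
}

def recommend(part, top_k=2):
    """Return the top-k parts most strongly related to the selected part."""
    related = sorted(ENGINE_PARTS.get(part, []), key=lambda p: p[1], reverse=True)
    return [name for name, _ in related[:top_k]]

print(recommend("piston"))  # strongest relationships first
```

In the paper's setting the weights would come from the is-a taxonomy rather than being hand-assigned; the lookup itself is the same sorted-by-weight selection.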

Semantic-based Automatic Open API Composition Algorithm for Easier-to-use Mashups (Easier-to-use 매쉬업을 위한 시맨틱 기반 자동 Open API 조합 알고리즘)

  • Lee, Yong Ju
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.5
    • /
    • pp.359-368
    • /
    • 2013
  • A mashup is a web application that combines several different sources to create new services using Open APIs (Application Programming Interfaces). Although mashups have become very popular over the last few years, combining a large number of APIs into a mashup raises several challenging issues, especially when composite APIs are integrated manually by mashup developers. This paper proposes a novel algorithm for automatic Open API composition, consisting of two phases: constructing an operation connecting graph and searching for composition candidates. We construct an operation connecting graph based on the semantic similarity between the inputs and outputs of Open APIs, and generate directed acyclic graphs (DAGs) that can produce an output satisfying the desired goal. To produce the DAGs efficiently, we rapidly filter out APIs that are not useful for the composition. The algorithm is evaluated using a collection of REST and SOAP APIs extracted from ProgrammableWeb.com.
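The search phase can be sketched as a breadth-first search over operations whose inputs are already satisfiable, filtering out operations that cannot yet fire. The operation names and their input/output types below are illustrative assumptions (the paper matches inputs to outputs by semantic similarity rather than exact type names):

```python
# A minimal sketch of searching an operation connecting graph for an API
# chain whose output satisfies a goal, as described above. Operations and
# their input/output types are illustrative assumptions; exact string
# matching stands in for the paper's semantic similarity measure.
from collections import deque

# operation: (required_inputs, produced_output)
OPERATIONS = {
    "geocode": ({"address"}, "coordinates"),
    "nearby_search": ({"coordinates"}, "place_list"),
    "place_details": ({"place_list"}, "place_info"),
}

def compose(available_inputs, goal):
    """Breadth-first search for an operation chain producing `goal`."""
    queue = deque([(set(available_inputs), [])])
    seen = set()
    while queue:
        known, chain = queue.popleft()
        if goal in known:
            return chain
        key = frozenset(known)
        if key in seen:
            continue
        seen.add(key)
        for op, (inputs, output) in OPERATIONS.items():
            # Filter: only expand operations whose inputs are satisfied
            # and that produce something new.
            if inputs <= known and output not in known:
                queue.append((known | {output}, chain + [op]))
    return None  # no composition reaches the goal

print(compose({"address"}, "place_info"))
```

Because each expansion only adds new outputs to the known set, the search space is naturally acyclic, mirroring the DAG property the paper relies on.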

A study for 'Education 2.0' service case and Network Architecture Analysis using convergence technology (융합 기술을 활용한 '교육 2.0' 서비스 사례조사와 네트워크 아키텍처 분석에 관한 연구)

  • Kang, Jang-Mook;Kang, Sung-Wook;Moon, Song-Chul
    • Journal of Digital Contents Society
    • /
    • v.9 no.4
    • /
    • pp.759-769
    • /
    • 2008
  • Convergence technologies such as Open API, mash-up, and syndication, which stimulate the participation, sharing, and openness characteristic of Web 2.0, bring diversity to the field of education. Convergence in education means a shift toward 'education 2.0': a new form of education that reflects the Web 2.0 trend. The educational environment can become a social-network space that closely links learners, educators, and educational organizations. Network technology built on ontology languages makes semantic education possible, supporting personalized education services and their interconnection. Filtering through reputation systems, as at Amazon, and collective intelligence, as at Wikipedia, are leading examples. Education can adopt these actively because learners, as the main agents of education, can broaden their participation and communicate bilaterally on an equal footing. This paper introduces and studies a new network architecture for content linkage, analyzing how Web 2.0 technology and educational content can be converged. Education 2.0 services based on convergence technology, and a network architecture for realizing education 2.0, are introduced and analyzed so that this research may serve as a preliminary study for building an education 2.0 platform.

A Study of a Semantic Web Driven Architecture in Information Retrieval: Developing an Exploratory Discovery Model Using Ontology and Social Tagging (정보검색의 시맨틱웹 지향 설계에 관한 연구 - 온톨로지와 소셜태깅을 활용한 탐험적 발견행위 모델개발을 중심으로 -)

  • Cho, Myung-Dae
    • Journal of the Korean BIBLIA Society for library and Information Science
    • /
    • v.21 no.3
    • /
    • pp.151-163
    • /
    • 2010
  • Changes in the information environment make it necessary to investigate problems in existing information retrieval systems. Ontologies and social tagging, relatively new means of information organization, enable exploratory discovery of information: they connect a user's thoughts with the thoughts of numerous other people on the Internet, and through these chains of connections users forage for information actively and exploratively. The purpose of this study is therefore to identify, through qualitative research methods, the many discovery facilitators provided by ontologies and social tagging, and to create an exploratory discovery model based on them. The results show three top-level categories containing 5, 4, and 4 subcategories respectively. The first category, 'Browsing and Monitoring,' has 5 subcategories: Noticing the Needs, Being Aware, Perceiving, Stopping, and Examining a Resource. The second, 'Actively Participating,' has 4: Constructing Meaning, Social Bookmarking and Tagging, Sharing on Social Networking, and Specifying the Original Needs. The third, 'Actively Extending Thinking,' also has 4: Social Learning, Emerging Fortuitous Discovery, Creative Thinking, and Enhancing Problem-Solving Abilities. This model could contribute to the design of information systems that enhance the ability of exploratory discovery.

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple web sources, in order to expand a knowledge base. The proposed methodology proceeds in the following steps: 1) for queries separated into "subject-predicate" form, collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news, and classify the suitable documents; 2) determine whether each sentence is suitable for information extraction and derive a confidence score; 3) based on predicate features, extract information from the suitable sentences and derive an overall confidence for the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker service. Compared with the baseline model, the proposed system shows a higher performance index. The contribution of this study is a sequence tagging model based on bidirectional LSTM-CRF that uses the predicate feature of the query; with it, we developed a robust model that maintains high recall even across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types, and the proposed methodology extracts information effectively from various document types compared to the baseline model. Previous research has the limitation that performance degrades when extracting information from document types different from the training data.
In addition, this study prevents unnecessary extraction attempts on documents that do not contain the answer, through a step that predicts the suitability of documents and sentences before extraction, providing a way to maintain precision even in a real web environment. Because the task targets unstructured documents on the real web, there is no guarantee that a document contains the correct answer; when question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents with no correct answer. The policy of predicting document and sentence suitability is meaningful in that it helps maintain extraction performance in this setting. The limitations of this study and future research directions are as follows. First, data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction results can be degraded when morphological analysis is performed incorrectly; a more advanced morphological analyzer is needed to improve extraction performance. Second, entity ambiguity: the information extraction system of this study cannot distinguish different entities that share the same name. If several people with the same name appear in the news, the system may not extract information about the intended query.
Future research therefore needs measures to identify which of several same-named entities is intended. Third, evaluation query data: we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system, and built an evaluation data set from 2,800 documents (400 queries x 7 documents per query: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging whether each contains a correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries, but this is a costly activity that must be done manually; future research should evaluate the system on a larger query set. It is also necessary to develop a Korean benchmark data set for information extraction over queries against multi-source web documents, to build an environment in which results can be evaluated more objectively.
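The three-step pipeline above can be sketched with stand-in heuristics. The actual paper uses a bidirectional LSTM-CRF sequence tagger for extraction; the substring-based suitability score, the token-based extraction rule, and the example sentences below are all illustrative assumptions:

```python
# A minimal sketch of the described pipeline: sentence suitability is
# scored before extraction is attempted, and extraction is only run on
# sentences that pass the threshold. All scoring rules and data here are
# illustrative stand-ins for the paper's trained models.

def sentence_suitability(sentence, subject, predicate):
    """Step 2: score whether a sentence is suitable for extraction."""
    score = 0.0
    if subject in sentence:
        score += 0.5
    if predicate in sentence:
        score += 0.5
    return score

def extract_answer(sentence, subject, predicate):
    """Step 3: naive predicate-anchored extraction (stand-in for the tagger)."""
    tokens = sentence.rstrip(".").split()
    if predicate in tokens:
        idx = tokens.index(predicate)
        return " ".join(tokens[idx + 1:])  # tokens after the predicate
    return None

def pipeline(documents, subject, predicate, threshold=0.9):
    """Filter by suitability, then extract from qualifying sentences."""
    best = None
    for doc in documents:
        for sentence in doc.split(". "):
            score = sentence_suitability(sentence, subject, predicate)
            if score >= threshold:  # skip documents/sentences without the answer
                answer = extract_answer(sentence, subject, predicate)
                if answer:
                    best = (answer, score)
    return best

docs = ["Marie Curie was born in Warsaw. She won two Nobel Prizes."]
print(pipeline(docs, "Marie Curie", "born"))
```

The point the sketch illustrates is the suitability gate: sentences that do not mention both the subject and predicate never reach the extraction step, which is how the paper avoids extracting answers from documents that do not contain them.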