• Title/Summary/Keyword: RDF Content

A GIS Search Technique through Reduction of Digital Map and Ontologies

  • Kim, Bong-Je; Shin, Seong-Hyun; Hwang, Hyun-Suk; Kim, Chang-Soo
    • Journal of Korea Multimedia Society, v.9 no.12, pp.1681-1688, 2006
  • GIS systems have gradually been adopted for everyday life information as well as specialized services such as traffic, sightseeing, tracking, and disaster response. Most GIS services focus on displaying stored information on maps rather than providing a way for users to register and modify their own preferred information. In this paper, we present a new method that reduces DXF map data into a Simple Geographic Information File format using format-conversion algorithms. We also present a prototype implementation of an ontology-based GIS search system that supports associated information. Our contribution is to propose a new digital map format that provides a fast map-loading service and individually customized information in map services.

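A minimal sketch of the format-reduction idea in the abstract, assuming plain ASCII DXF input: such a file is a stream of (group-code, value) line pairs, so a map can be stripped down to per-layer coordinate records and written to a much smaller flat file. The record layout is a hypothetical stand-in for the paper's Simple Geographic Information File format, which is not specified here.

```python
# Reduce an ASCII DXF map to (layer, x1, y1, x2, y2) records for LINE
# entities. Group codes: 0 = entity type, 8 = layer, 10/20 = start x/y,
# 11/21 = end x/y. The output layout is an illustrative assumption,
# not the authors' specification.

def reduce_dxf_lines(path):
    """Return (layer, x1, y1, x2, y2) tuples for each LINE entity."""
    records, current = [], None
    with open(path) as f:
        pairs = iter([line.strip() for line in f])
        for code in pairs:
            value = next(pairs, None)
            if value is None:
                break
            if code == "0":  # a new entity begins; flush the previous LINE
                if current is not None and all(k in current for k in ("8", "10", "20", "11", "21")):
                    records.append((current["8"],
                                    float(current["10"]), float(current["20"]),
                                    float(current["11"]), float(current["21"])))
                current = {} if value == "LINE" else None
            elif current is not None:
                current[code] = value
    return records

# Writing the tuples out as CSV lines would yield the reduced map file.
```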

A Study on Application of Semantic Web for e-Learning (시멘틱 웹의 e-Learning 적용에 대한 연구)

  • 정의석; 김현철
    • Proceedings of the Korean Information Science Society Conference, 2003.10a, pp.589-591, 2003
  • Most education currently delivered through e-Learning amounts to simple training rather than genuine learning. For true learning to take place in e-Learning, adaptive, just-in-time instruction suited to the learner's level must be provided continuously and in an integrated way, not piecemeal. This requires that learning resources be described not only from a technical perspective but also from a heuristic-learning perspective, and that a computer (agent) understand the semantics and relations of the components of a learning resource, namely its goal, content, context, structure, and strategy, so that it can retrieve and infer only the information a learner needs, repackage it to the learner's level, and deliver knowledge adaptively and just in time. Applying the Semantic Web, with its metadata (RDF), ontology, and agent mechanisms, to the e-Learning environment makes it possible to grasp the semantics and relations of the components of learning resources and deliver knowledge adaptively, realizing self-directed learning.

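As a hedged illustration of the metadata layer this abstract calls for, the sketch below describes a learning object's goal, context, strategy, and a prerequisite relation as RDF triples an agent could query. It requires the rdflib package, and the ex: vocabulary is invented for the example; the paper does not define a concrete schema.

```python
# Describe a learning object's components as RDF so an agent can reason
# over their semantics and relations. The ex: terms are illustrative only.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/elearning#")
g = Graph()
g.bind("ex", EX)

lo = EX["lesson-recursion-01"]
g.add((lo, RDF.type, EX.LearningObject))
g.add((lo, EX.goal, Literal("Understand recursion")))
g.add((lo, EX.context, Literal("Second-year CS course")))
g.add((lo, EX.strategy, Literal("Worked examples, then exercises")))
g.add((lo, EX.prerequisite, EX["lesson-functions-01"]))  # a relation an agent can follow

print(g.serialize(format="turtle"))
```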

Nutritional Evaluation of Tofu Containing Dried Soymilk Residue(DSR) 2. Evaluation of Carbohydrate Quality (건조비지 첨가 두부의 영양적 품질평가 2. 탄수화물의 품질)

  • Kweon, Mi-Na; Ryu, Hong-Soo; Mun, Sook-Im
    • Journal of the Korean Society of Food Science and Nutrition, v.22 no.3, pp.262-265, 1993
  • The dietary fiber content and carbohydrate digestibility of dried soymilk residue (DSR) and of tofu containing DSR were evaluated. Insoluble dietary fiber (IDF) content was 37.4% and 49.8% (moisture-free basis) for common soymilk residue and DSR, respectively. Both soymilk residues contained 12.5% soluble dietary fiber (SDF, dry basis). Tofu in which DSR was substituted for 10% of the weight of the soybeans used had higher dietary fiber content (30% more IDF and 45% more SDF) than tofu manufactured in the traditional manner. Carbohydrate digestibility was much lower in all tofu products, ranging from 11% to 21%, and there was a negative correlation (r = -0.9243) between carbohydrate digestibility and total dietary fiber content.

Automatic Recommendation of Nearby Tourist Attractions related to Events (이벤트와 관련된 주변 관광지 자동 추천 알고리즘 개발)

  • Ahn, Jinhyun; Im, Dong-Hyuk
    • Journal of the Korea Academia-Industrial cooperation Society, v.21 no.3, pp.407-413, 2020
  • Participating in exhibitions is one of the major activities of tourists. When selecting their next destination after attending an event, they use map services and social network services, such as blogs, to obtain information about tourist attractions. Map services give location-based recommendations, because they can easily retrieve information about nearby places. Blogs contain informative content about tourist attractions and thereby support content-based recommendations. However, few services consider both location and content: location-based recommendation may suggest attractions unrelated to the content of the event attended, while content-based recommendation may suggest attractions located far from the event. We propose an algorithm that considers both location and content, based on information from the Korea Tourism Organization's Linked Open Data (LOD), Wikipedia, and a Korean dictionary. By extracting nouns from the description of a tourist attraction and comparing them with the nouns of other attractions, a content-based relatedness is determined. The distance to the event is calculated from the latitude and longitude of each tourist attraction. A weight selected by the user linearly combines the distance-based and content-based scores to determine the preference order of the recommendations, as sketched below.
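A minimal sketch of that scoring rule, assuming Jaccard overlap of extracted nouns for content relatedness and a haversine distance normalized against a cutoff. The function names, the 50 km cutoff, and the normalization are illustrative assumptions, not the authors' exact design.

```python
# Rank attractions by a user-weighted blend of content relatedness
# (shared nouns) and proximity to the event.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def content_score(nouns_a, nouns_b):
    """Share of overlapping nouns (Jaccard); the paper's measure may differ."""
    a, b = set(nouns_a), set(nouns_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank(event, attractions, w=0.5, max_km=50.0):
    """Order attractions by w * content + (1 - w) * proximity."""
    scored = []
    for att in attractions:
        c = content_score(event["nouns"], att["nouns"])
        d = distance_km(event["lat"], event["lon"], att["lat"], att["lon"])
        proximity = max(0.0, 1.0 - d / max_km)  # 1 at the event, 0 beyond max_km
        scored.append((w * c + (1 - w) * proximity, att["name"]))
    return sorted(scored, reverse=True)
```

Setting w near 1 reproduces a purely content-based ranking; w near 0 reproduces a purely location-based one.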

A Study on the Possibility of Converting Livestock Waste to RDF (축산폐기물의 고형연료화 가능성에 관한 연구)

  • Kim, Seong-Jung; Lee, Je-Hak
    • Journal of the Korea Organic Resources Recycling Association, v.21 no.2, pp.51-55, 2013
  • This research analyzed the composition and combustion characteristics of pellet fuel made from livestock waste and agricultural by-products. Analysis of solid fuel made from livestock waste alone showed that its proximate composition, elemental analysis, and heating value met the solid-fuel standard. In addition, the ash contained high concentrations of K, P, and Na, indicating possible use as a soil conditioner. However, livestock waste alone was not suitable as solid fuel because of its relatively low heating value. To improve the heating value and early ignition, we mixed agricultural by-products (i.e., chaff and sawdust) into the livestock waste. Compared with livestock waste alone, the mixture showed a significant increase in combustibles and heating value and a decrease in moisture content.
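A back-of-the-envelope sketch of the mixing idea, assuming simple mass-weighted averaging of heating value and moisture; all numbers are hypothetical placeholders, not the paper's measurements.

```python
# Blending a dry, high-calorific by-product into wet livestock waste
# raises the mixture's heating value and lowers its moisture.

def mix(frac_waste, waste, byproduct):
    """Mass-weighted average of (heating value in kcal/kg, moisture in %)."""
    f = frac_waste
    hv = f * waste["hv"] + (1 - f) * byproduct["hv"]
    moisture = f * waste["moist"] + (1 - f) * byproduct["moist"]
    return hv, moisture

livestock = {"hv": 2500, "moist": 40}  # hypothetical values
sawdust = {"hv": 4300, "moist": 12}    # hypothetical values
print(mix(0.7, livestock, sawdust))    # roughly (3040, 31.6): higher HV, lower moisture
```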

Development of CO2 Emission Factor for Wood Chip Fuel and Reduction Effects (목질계 바이오매스 중 대체연료 우드칩의 온실가스(CO2) 배출계수 개발 및 저감 효과)

  • Lee, Seul-Ki; Kim, Seung-Jin; Cho, Chang-Sang; Jeon, Eui-Chan
    • Journal of Climate Change Research, v.3 no.3, pp.211-224, 2012
  • Technology for energy recovery from waste can reduce greenhouse gas emissions, so several companies have recently begun using RDF, RPF, and WCF instead of coal alone, and the share of these fuels is increasing. In this study, we developed a CO2 emission factor for wood chip fuel (WCF) through fuel analysis. Its moisture content is 23%, its net calorific value as received is 2,845 kcal/kg, and its carbon content as received is 34%. The resulting emission factor is 105 ton CO2/TJ, which is 5.9% lower than the 2006 IPCC guideline default factor of 112 ton CO2/TJ. The gross GHG emissions of plant A are 178,767 ton CO2 eq./yr and its net GHG emissions are 40,359 ton CO2 eq./yr, so the reduction achieved by using WCF is 138,408 ton CO2/yr, which accounts for 77% of the gross emissions.
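The reported reduction can be re-derived from the abstract's own figures: the reduction is gross minus net emissions, and its share of gross comes out at 77%. The snippet below just re-runs that arithmetic.

```python
# Re-check the reduction arithmetic reported in the abstract.
gross = 178_767  # ton CO2 eq./yr, plant A, gross GHG emissions
net = 40_359     # ton CO2 eq./yr, net GHG emissions

reduction = gross - net
print(reduction)                   # 138408 ton CO2 eq./yr
print(f"{reduction / gross:.0%}")  # 77% of gross emissions avoided

# Developed wood-chip factor: 105 ton CO2/TJ, reported as 5.9% below
# the 2006 IPCC default of 112 ton CO2/TJ.
```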

Content based data search using semantic annotation (시맨틱 주석을 이용한 내용 기반 데이터 검색)

  • Kim, Byung-Gon; Oh, Sung-Kyun
    • Journal of Digital Contents Society, v.12 no.4, pp.429-436, 2011
  • Documents, images, videos, and other materials on the web have been increasing rapidly, and searching them efficiently has become an important topic. Internet search has shifted from keyword-based search to semantic search, which finds the implications of and relations between data elements. Many annotation-processing systems that manage metadata for semantic search have been proposed, but annotation data generated by different methods and in different forms are difficult to search across systems in an integrated way. In this study, to resolve this problem, we categorize the levels of annotation documents and propose a method to measure the similarity between annotation documents. This similarity measure can be used to search for similar or related documents, images, and videos regardless of the form of the source data.
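A hedged sketch of the core operation the abstract describes. The paper defines its own annotation levels and measure, so Jaccard similarity over (property, value) annotation pairs is an illustrative stand-in, not the authors' formula.

```python
# Measure how alike two annotation documents are, so related sources can
# be retrieved together regardless of media type.

def annotation_similarity(doc_a, doc_b):
    """Jaccard similarity over sets of (property, value) annotation pairs."""
    a, b = set(doc_a.items()), set(doc_b.items())
    return len(a & b) / len(a | b) if a | b else 0.0

img = {"dc:subject": "harbor", "dc:creator": "kim", "ex:scene": "sunset"}
vid = {"dc:subject": "harbor", "dc:creator": "lee", "ex:scene": "sunset"}
print(annotation_similarity(img, vid))  # 0.5: two shared pairs out of four
```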

Cataloging Trends after LRM and its Acceptance in KORMARC Bibliographic Format (LRM 이후 목록 동향과 KORMARC 통합서지용에서의 수용 방안)

  • Lee, Mihwa; Lee, Eun-Ju; Rho, Jee-Hyun
    • Journal of the Korean BIBLIA Society for library and Information Science, v.33 no.1, pp.25-45, 2022
  • This study developed a revision of the KORMARC bibliographic format reflecting cataloging trends after LRM, using a literature review, analysis of MARC 21 discussion papers, and a comparison of the fields in MARC 21 and KORMARC. The fields and subfields that need revision in KORMARC, and the considerations behind them, are as follows. First, in terms of LRM/RDA, fields 381 or 387 for the representative expression, field 881 and changes and additions to its subfields for the manifestation statement, and a data provenance code in the ▾7 subfield may be considered. Second, in terms of Linked Data, the ▾1 subfield for RWOs and field 758 for related work identifiers can be added. Third, for data exchange between KORMARC and BIBFRAME, the format should be developed with mapping to BIBFRAME classes and attributes in mind. Fourth, additional fields could be developed: 251 version information, 334 mode of issuance, 335 expansion plan, 341 accessibility content, 348 format of notated music, 353 supplementary content characteristics, 532 accessibility note, 370 associated place, 385 audience characteristics, 386 creator/contributor characteristics, 388 time period of creation, 688 subject added entry-type of entity unspecified, 884 description conversion information, and 885 matching information. This study can be used to revise the KORMARC bibliographic format and to build and utilize bibliographic data in domestic libraries.
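Purely to illustrate the notation used above, the sketch below renders two of the discussed fields in a MARC-style text layout with ▾ as the subfield delimiter. The field contents are invented, and real KORMARC records are exchanged in ISO 2709 or MARCXML rather than built this way.

```python
# Render variable fields in a KORMARC/MARC-like display form:
# 'TAG ind1 ind2 ▾a...▾7...'. Contents are illustrative only.

def render_field(tag, indicators, subfields):
    """Format a variable field with ▾-delimited subfields."""
    body = "".join(f"\u25be{code}{value}" for code, value in subfields)
    return f"{tag} {indicators}{body}"

# 381: representative expression; 758: related work identifier (▾1 = RWO URI)
print(render_field("381", "  ", [("a", "Korean"), ("7", "data provenance note")]))
print(render_field("758", "  ", [("a", "Related work"), ("1", "http://example.org/work/123")]))
```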

A Dynamic Management Method for FOAF Using RSS and OLAP cube (RSS와 OLAP 큐브를 이용한 FOAF의 동적 관리 기법)

  • Sohn, Jong-Soo; Chung, In-Jeong
    • Journal of Intelligence and Information Systems, v.17 no.2, pp.39-60, 2011
  • Since the introduction of Web 2.0 technology, social network services have been recognized as a foundation of important future information technology. The advent of Web 2.0 changed who creates content: in the earlier web, content creators were the service providers, whereas in the recent web they are the service users. Users share experiences with other users and improve content quality, which has increased the importance of social networks, and diverse forms of social network service have emerged from the relations and experiences of users. A social network is a network that constructs and expresses social relations among people who share interests and activities. Today's social network services are not confined to showing user interactions; they have developed to a level at which content generation and evaluation interact with each other. As the volume of content generated from social network services and the number of connections between users have drastically increased, social network extraction has become more complicated, and the following problems arise. First, the representational power of objects in the social network is insufficient. Second, the diverse connections among users cannot be fully expressed. Third, it is difficult to reflect dynamic changes in the social network caused by shifting user interests. Lastly, there is no method for integrating and processing data efficiently in a heterogeneous distributed computing environment. The first and last problems can be solved with FOAF, an ontology-based vocabulary for describing user profiles for social network construction; solving the second and third requires a novel technique that reflects dynamic changes in user interests and relations. In this paper, we propose a method that overcomes these problems by applying FOAF and RSS, a standard RDF/XML vocabulary for syndicating web site content, to an OLAP system in order to dynamically update and manage FOAF. We exploit FOAF's data interoperability, use RSS to capture changes in user interests over time, collect users' personal information and relations through FOAF and their content through RSS, and insert the collected data into a database organized as a star schema. The proposed system generates an OLAP cube from this database, and the Dynamic FOAF Management Algorithm processes the cube. The algorithm consists of two functions: find_id_interest(), which extracts user interests during the input period, and find_relation(), which extracts users matching those interests (see the sketch below). Finally, the system reconstructs FOAF to reflect the extracted relationships and interests. To justify the idea, we present the implemented result together with its analysis: the system was written in C# with an MS-SQL database, with FOAF and RSS data collected from livejournal.com. The results show that users' foaf:interest entries increased by an average of 19 percent over four weeks, and in proportion to that change, users' foaf:knows entries grew by an average of 9 percent over the same period. Because FOAF and RSS are widely supported in Web 2.0 and social network services, the method has a definite advantage in utilizing user data distributed across diverse web sites and services regardless of language or type of computer. Using the suggested method, services can better cope with rapid changes in user interests through the automatic updating of FOAF.
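The two functions at the core of the Dynamic FOAF Management Algorithm can be sketched as follows. This is a hedged Python transliteration (the paper's implementation is C# over MS-SQL), and the flat post/profile data layout is an assumed simplification of the star schema.

```python
# Sketch of find_id_interest() and find_relation() from the abstract.
from collections import Counter

def find_id_interest(posts, user, since):
    """Extract the user's dominant interests from RSS posts in the period."""
    tags = Counter(tag
                   for p in posts
                   if p["user"] == user and p["date"] >= since
                   for tag in p["tags"])
    return [tag for tag, _ in tags.most_common(3)]

def find_relation(profiles, interests):
    """Find users whose foaf:interest entries overlap the given interests."""
    wanted = set(interests)
    return [uid for uid, prof in profiles.items()
            if wanted & set(prof["interests"])]

# The reconstructed FOAF would then receive updated foaf:interest and
# foaf:knows entries for the extracted interests and matching users.
```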

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun; Lee, Myungjin
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.43-61, 2019
  • The development of artificial intelligence technologies has accelerated rapidly with the Fourth Industrial Revolution, and AI research has been actively conducted in fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on cognitive problems related to human intelligence, such as learning and problem solving, and thanks to recent interest in the technology and research on various algorithms, the field has achieved more technological progress than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to let AI agents make decisions using machine-readable, processable knowledge constructed from the complex, informal human knowledge and rules of various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, knowledge bases have aimed to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data, and they are used for intelligent processing in applications such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. Much recent knowledge-based AI research uses DBpedia, one of the largest knowledge bases, which extracts structured content from the information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, user-created summaries of an article's unifying aspects. That knowledge is generated through mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework, so DBpedia can expect high reliability in terms of accuracy by generating knowledge from semi-structured, user-created infobox data. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia is limited in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate its appropriateness, we describe a knowledge extraction model that follows the DBpedia ontology schema and is trained on Wikipedia infoboxes. The model consists of three steps: classifying the document into ontology classes, classifying the sentences appropriate for triple extraction, and selecting values and transforming them into the RDF triple structure (a skeleton of this pipeline is sketched below). Wikipedia infobox structures are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes; we then classify the appropriate sentences according to the attributes belonging to that class; finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, covering about 200 classes and about 2,500 relations. We also ran comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction step. Through the proposed process, structured knowledge can be obtained from text documents according to the ontology schema, and the methodology can significantly reduce the expert effort needed to construct instances according to the schema.
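A skeleton of the three-step pipeline, assuming stub classifiers: the real system trains a document classifier over infobox categories and a CRF or Bi-LSTM-CRF sequence tagger over BIO-tagged sentences. The names and the toy document below are illustrative only.

```python
# Three-step knowledge extraction: classify the document to an ontology
# class, select sentences per attribute, then emit RDF-style triples.

def classify_document(text):
    """Step 1: map the document to a DBpedia-style class (stub)."""
    return "dbo:Person"  # stand-in for a trained document classifier

def select_sentences(text, attribute):
    """Step 2: keep sentences likely to express the attribute (stub)."""
    return [s for s in text.split(". ") if attribute.split(":")[1] in s.lower()]

def extract_value(sentence, attribute):
    """Step 3: a trained tagger (CRF / Bi-LSTM-CRF) would BIO-tag the
    value span here; this stub just takes the last token."""
    return sentence.rstrip(".").split()[-1]

def extract_triples(subject, text, attributes):
    cls = classify_document(text)
    triples = [(subject, "rdf:type", cls)]
    for attr in attributes:
        for sent in select_sentences(text, attr):
            triples.append((subject, attr, extract_value(sent, attr)))
    return triples

doc = "Yi Sun-sin was a Korean admiral. His birthplace was Seoul."
print(extract_triples("dbr:Yi_Sun-sin", doc, ["dbo:birthplace"]))
# [('dbr:Yi_Sun-sin', 'rdf:type', 'dbo:Person'),
#  ('dbr:Yi_Sun-sin', 'dbo:birthplace', 'Seoul')]
```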