• Title/Summary/Keyword: semantic resources

Search Results: 204

A Study on Library 3.0 Concept and its Service Model (도서관 3.0의 개념과 서비스 모형에 관한 연구)

  • Noh, Young-Hee
    • Journal of the Korean Society for Information Management / v.27 no.4 / pp.283-307 / 2010
  • Recently, the concept and substance of Library 3.0 have been discussed by scholars and specialists alongside Web 3.0. This study analyzes the debates on Library 3.0, reviews its concept, and proposes a Library 3.0 service model based on that analysis. The keywords of the proposed Library 3.0 model are the Social Semantic Digital Library (SSDL), the Linked Library, and the Mobile Library. First, the SSDL enables real knowledge sharing and cooperation by applying both semantic web technology, which allows machines to manage data, and social networking services to e-libraries. Second, the Linked Library indicates that library resources become linked data connecting libraries all over the world. Finally, the Mobile Library refers to a ubiquitous library equipped with RFID and mobile technology.

Linked Legal Data Construction and Connection of LOD Cloud

  • Jo, Dae Woong;Kim, Myung Ho
    • Journal of the Korea Society of Computer and Information / v.21 no.5 / pp.11-18 / 2016
  • Linked Data is a web-standard data definition method devised to connect and expand resources in a standardized form. Linked Data built in various areas expands existing knowledge through an open data cloud such as LOD (Linked Open Data). Projects to link and serve existing knowledge through LOD are under way worldwide; in Korea, however, participation remains limited to specific fields at the research level. In this paper, we suggest a method to build an area of technical knowledge, such as legislation, as Linked Data, and to distribute the resulting Linked Data to the LOD cloud. The proposed construction method divides legislative knowledge into structural, semantic, and integrated perspectives, and builds each by converting it to Linked Data according to that perspective. The resulting Linked Legal Data is prepared for linking knowledge in a standardized form by being distributed onto the LOD cloud. It is equipped with schemas for link services of various types, and helps increase understanding of how existing legal information can be accessed.
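At its simplest, the construction step described above amounts to emitting RDF triples and serializing them in a standard form such as N-Triples for publication to the LOD cloud. The sketch below uses hypothetical URIs and a Dublin Core title property for illustration; it is not the paper's actual legal vocabulary or schema:

```python
# Minimal sketch: representing a piece of legislation as Linked Data triples
# and serializing them as N-Triples for distribution to an LOD cloud.
# All URIs below are hypothetical placeholders.

def to_ntriples(triples):
    """Serialize (subject, predicate, object) triples as N-Triples lines."""
    lines = []
    for s, p, o in triples:
        if o.startswith("http"):
            obj = f"<{o}>"   # object is another resource: link to it
        else:
            obj = f'"{o}"'   # object is a literal value
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

triples = [
    ("http://example.org/law/1234",
     "http://purl.org/dc/terms/title",
     "Personal Information Protection Act"),
    ("http://example.org/law/1234",
     "http://www.w3.org/2002/07/owl#sameAs",
     "http://example.org/lod/legislation/1234"),
]

print(to_ntriples(triples))
```

The `owl:sameAs` link is what connects a locally built resource to an equivalent resource already in the LOD cloud.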

Learning Probabilistic Kernel from Latent Dirichlet Allocation

  • Lv, Qi;Pang, Lin;Li, Xiong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.6 / pp.2527-2545 / 2016
  • Measuring the similarity of given samples is a key problem of recognition, clustering, retrieval, and related applications. A number of works, e.g., kernel methods and metric learning, have contributed to this problem. The challenge of similarity learning is to find a similarity measure that is robust to intra-class variance and simultaneously selective for inter-class characteristics. We observed that the similarity measure can be improved if the data distribution and hidden semantic information are exploited in a more sophisticated way. In this paper, we propose a similarity learning approach for retrieval and recognition. The approach, termed LDA-FEK, derives a free energy kernel (FEK) from Latent Dirichlet Allocation (LDA). First, it trains LDA and constructs the kernel using the parameters and variables of the trained model. Then, the unknown kernel parameters are learned by a discriminative learning approach. The main contributions of the proposed method are twofold: (1) the method is computationally efficient and scalable since the kernel parameters are determined in a staged way; (2) the method exploits the data distribution and semantic-level hidden information by means of LDA. To evaluate the performance of LDA-FEK, we apply it to image retrieval over two data sets and to text categorization on four popular data sets. The results show the competitive performance of our method.
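As a rough illustration of the general recipe (build a kernel from trained topic-model variables, then tune its parameters), documents can be compared through their inferred topic proportions. The weight vector below stands in for the kernel parameters that LDA-FEK would learn discriminatively; this toy is not the paper's actual free energy kernel:

```python
# Toy sketch: a weighted inner-product kernel over per-document topic
# proportions, as one might obtain from a trained LDA model. Illustrative
# only; not the free energy kernel derived in the paper.

def topic_kernel(theta_x, theta_y, weights):
    """Weighted inner product of two topic-proportion vectors."""
    return sum(w * a * b for w, a, b in zip(weights, theta_x, theta_y))

# Topic proportions for three toy documents over 3 topics.
doc_a = [0.8, 0.1, 0.1]
doc_b = [0.7, 0.2, 0.1]   # similar topic mix to doc_a
doc_c = [0.1, 0.1, 0.8]   # dominated by a different topic

weights = [1.0, 1.0, 1.0]  # would be tuned discriminatively in LDA-FEK

sim_ab = topic_kernel(doc_a, doc_b, weights)
sim_ac = topic_kernel(doc_a, doc_c, weights)
assert sim_ab > sim_ac  # documents with similar topic mixes score higher
```

The staged nature of the method shows up even here: the topic vectors come from one (generative) training step, while the weights are fit in a separate (discriminative) step.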

Clustering of MPEG-7 Data for Efficient Management (MPEG-7 데이터의 효율적인 관리를 위한 클러스터링 방법)

  • Ahn, Byeong-Tae;Kang, Byeong-Shoo;Diao, Jianhua;Kang, Hyun-Syug
    • Journal of Korea Multimedia Society / v.10 no.1 / pp.1-12 / 2007
  • To use multimedia data under the restricted resources of a mobile environment, a management method for MPEG-7 documents is needed. General XML clustering methods can be used for this purpose, but to improve performance further, a new clustering method that exploits the characteristics of MPEG-7 documents is required. Such a method improves query processing speed in multimedia search and makes document storage suitable for various applications. In this paper, we propose a new clustering method for MPEG-7 documents, based on semantic relationships among their elements, for the effective management of large-capacity multimedia data, and we compare it to existing clustering methods.
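A crude stand-in for element-level clustering of an MPEG-7 document is to group elements by their root-to-element path, so that elements playing the same structural role land in the same cluster. The XML fragment below is invented for illustration and is not real MPEG-7; the paper's semantic relationships would refine such structural groups further:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Invented MPEG-7-like fragment for illustration only.
doc = """<Mpeg7>
  <Description>
    <Video><MediaTime>00:00</MediaTime></Video>
    <Video><MediaTime>01:30</MediaTime></Video>
    <Audio><MediaTime>00:10</MediaTime></Audio>
  </Description>
</Mpeg7>"""

def cluster_by_path(xml_text):
    """Map each root-to-element path to the texts of elements under it."""
    clusters = defaultdict(list)

    def walk(node, path):
        key = path + "/" + node.tag
        if node.text and node.text.strip():
            clusters[key].append(node.text.strip())
        for child in node:
            walk(child, key)

    walk(ET.fromstring(xml_text), "")
    return dict(clusters)

clusters = cluster_by_path(doc)
# Elements with the same structural role end up in the same cluster:
print(clusters["/Mpeg7/Description/Video/MediaTime"])  # ['00:00', '01:30']
```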

Learning Similarity with Probabilistic Latent Semantic Analysis for Image Retrieval

  • Li, Xiong;Lv, Qi;Huang, Wenting
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.4 / pp.1424-1440 / 2015
  • It is a challenging problem to find the intended images among a large number of candidates. Content-based image retrieval (CBIR) is the most promising way to tackle this problem, where the most important topic is measuring the similarity of images so as to cover variance in shape, color, pose, illumination, etc. While previous works have made significant progress, their ability to adapt to a given dataset is not fully explored. In this paper, we propose a similarity learning method on the basis of a probabilistic generative model, i.e., probabilistic latent semantic analysis (PLSA). It first derives a Fisher kernel, a function over the parameters and variables, based on PLSA. Then, the parameters are determined by simultaneously maximizing the log-likelihood function of PLSA and the retrieval performance over the training dataset. The main advantages of this work are twofold: (1) deriving a similarity measure based on PLSA, which fully exploits the data distribution and Bayesian inference; (2) learning model parameters by maximizing the fit of the model to the data and the retrieval performance simultaneously. The proposed method (PLSA-FK) is empirically evaluated over three datasets, and the results exhibit promising performance.
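The second ingredient of PLSA-FK, tuning similarity parameters against retrieval performance on training data, can be caricatured with a one-parameter grid search. The topic vectors, relevance labels, and the margin objective below are all invented stand-ins for the paper's joint likelihood/performance objective:

```python
# Toy sketch: pick a similarity parameter by maximizing a retrieval
# criterion on training data. Invented data; not the PLSA-FK objective.

def similarity(x, y, w):
    """Blend two topic-dimension products with weight w in [0, 1]."""
    return w * x[0] * y[0] + (1.0 - w) * x[1] * y[1]

query = [0.9, 0.1]
docs = {"relevant": [0.8, 0.2], "irrelevant": [0.2, 0.8]}

def margin(w):
    """Gap between relevant and irrelevant scores; larger is better."""
    return (similarity(query, docs["relevant"], w)
            - similarity(query, docs["irrelevant"], w))

# Grid search stands in for gradient-based joint optimization.
best_w = max((i / 10 for i in range(11)), key=margin)
print(best_w)
```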

An Ontology Population Model based on ISO/IEC 11179 (ISO/IEC 11179 기반의 온톨로지 확장 모델)

  • Jeong, Hye-Jin;Baik, Doo-Kwon;Jeong, Dong-Won
    • Journal of KIISE: Databases / v.36 no.5 / pp.386-398 / 2009
  • This paper proposes an ontology population model based on ISO/IEC 11179. Much research has recently been done on harmonizing Web 2.0 and the Semantic Web, and this harmonization is referred to as Web 3.0. The most important issues for realizing Web 3.0 include defining ontology schemas and populating instances for ontologies. To resolve these issues, Web ontology schemas should be precisely defined, and a method for populating Web ontologies from Web resources should be developed. This paper proposes a Web ontology population model based on ISO/IEC 11179 - Metadata Registry (MDR), the international standard developed to manage and use common standard concepts.
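The basic move in MDR-based population is turning registered data elements into ontology terms. The sketch below is a hedged illustration with hypothetical field names and URIs; a real ISO/IEC 11179 registry describes data elements far more richly (object class, property, representation):

```python
# Hedged sketch: populating ontology property triples from ISO/IEC
# 11179-style metadata registry entries. Field names and URIs are invented.

mdr_elements = [
    {"name": "personBirthDate", "definition": "Date a person was born",
     "datatype": "date"},
    {"name": "personFamilyName", "definition": "Family name of a person",
     "datatype": "string"},
]

def populate(elements, base="http://example.org/onto#"):
    """Turn each registered data element into ontology property triples."""
    triples = []
    for e in elements:
        uri = base + e["name"]
        triples.append((uri, "rdf:type", "owl:DatatypeProperty"))
        triples.append((uri, "rdfs:comment", e["definition"]))
    return triples

triples = populate(mdr_elements)
print(len(triples))  # two triples per data element
```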

Alignment of Hypernym-Hyponym Noun Pairs between Korean and English, Based on the EuroWordNet Approach (유로워드넷 방식에 기반한 한국어와 영어의 명사 상하위어 정렬)

  • Kim, Dong-Sung
    • Language and Information / v.12 no.1 / pp.27-65 / 2008
  • This paper presents a set of methodologies for aligning hypernym-hyponym noun pairs between Korean and English, based on the EuroWordNet approach. Following the methods used in EuroWordNet, our approach makes extensive use of WordNet in four steps of the building process: 1) monolingual dictionaries have been used to extract proper hypernym-hyponym noun pairs, 2) a bilingual dictionary has been used to convert the extracted pairs, 3) WordNet has served as the backbone of the alignment criteria, and 4) WordNet has been used to select the most similar pair among the candidates. The importance of this study lies not only in enriching semantic links between the two languages, but also in integrating lexical resources based on a language-specific and language-dependent structure. Our approach aims at building an accurate and detailed lexical resource with proper measures rather than at the fast development of a generic one using NLP techniques.
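The extract-convert-verify pipeline in the four steps above can be sketched with tiny invented stand-ins for the real dictionaries and for WordNet; the actual alignment uses far richer similarity measures than this exact-match check:

```python
# Toy sketch of the alignment pipeline: Korean hypernym-hyponym pairs are
# converted via a bilingual dictionary, then kept only if an English
# hypernym resource (a WordNet stand-in) confirms the link. All three
# dictionaries are invented miniatures.

ko_pairs = [("동물", "개"), ("동물", "책")]          # (hypernym, hyponym)
ko_to_en = {"동물": "animal", "개": "dog", "책": "book"}
en_hypernyms = {"dog": "animal", "cat": "animal"}   # WordNet stand-in

def align(pairs, bilingual, hypernyms):
    """Keep converted pairs whose hypernym link the English resource confirms."""
    aligned = []
    for hyper, hypo in pairs:
        h, l = bilingual.get(hyper), bilingual.get(hypo)
        if h and l and hypernyms.get(l) == h:
            aligned.append((h, l))
    return aligned

print(align(ko_pairs, ko_to_en, en_hypernyms))  # [('animal', 'dog')]
```

The unconfirmed pair (animal, book) is dropped, which is the role WordNet plays as the backbone of the alignment criteria.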

Data Exchange between Cadastre and Physical Planning by Database Coupling

  • Kim, Kam-Rae;Choi, Won-Jun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.1 / pp.69-75 / 2007
  • Information in the physical planning field captures the socio-economic potential of land resources, while cadastral data captures the physical and legal realities of the land. The two domains both deal with land information, but from different views. Cadastre has to evolve into a multi-purpose system that provides value-added information and supports a wide spectrum of decision makers by combining its own information with other spatial and non-spatial databases. In this context, the demand for data exchange between the two domains is growing, but this cannot be met without resolving the heterogeneity between the two information applications. Each discipline sees reality within its own scope, meaning each has a unique way of abstracting real-world phenomena into its database. The heterogeneity problem emerges when a GIS is established autonomously and independently; it causes considerable communication difficulties, since heterogeneous representations produce unique data semantics for each database. Semantic heterogeneity is obviously an obstacle to data exchange but, at the same time, it can also be a key to solving the problem. Therefore, this study focuses on facilitating data sharing between cadastre and physical planning by resolving semantic heterogeneity. The core task is developing a mechanism that converts cadastral data into information for physical planning through DB coupling techniques.
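The conversion mechanism described above boils down to explicit mappings between the two schemas: field names and code values from the cadastral database are translated into the planning database's vocabulary. All field names and code tables below are invented for illustration:

```python
# Minimal sketch of DB coupling via a conversion mapping between a
# cadastral schema and a planning schema. Invented fields and codes.

FIELD_MAP = {"parcel_id": "land_unit_id", "land_cat": "use_class"}
CODE_MAP = {"land_cat": {"01": "residential", "02": "agricultural"}}

def to_planning(cadastral_record):
    """Convert one cadastral record into the planning schema."""
    out = {}
    for field, value in cadastral_record.items():
        target = FIELD_MAP.get(field, field)       # rename the field
        out[target] = CODE_MAP.get(field, {}).get(value, value)  # decode value
    return out

record = {"parcel_id": "A-100", "land_cat": "01"}
print(to_planning(record))  # {'land_unit_id': 'A-100', 'use_class': 'residential'}
```

Making the mapping an explicit, inspectable table is also how the semantic heterogeneity itself becomes "a key to solving the problem": the table documents exactly where the two abstractions of reality diverge.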

Equivalence Checking for Statechart Specification (Statechart 명세의 등가 관계 검사)

  • Park, Myung-Hwan;Bang, Ki-Seok;Choi, Jin-Young;Lee, Jeong-A;Han, Sang-Yoong
    • Journal of KIISE: Computing Practices and Letters / v.6 no.6 / pp.608-619 / 2000
  • In this paper, we give a formal semantics for Statechart via a translation into the Algebra of Communicating Shared Resources (ACSR). Statechart is a very rich graphical specification language suitable for specifying complicated reactive systems. However, the incorporation of graphs into specifications, together with the rich syntax, makes the Statechart semantics very complicated and ambiguous, so it is very difficult to verify the correctness of Statechart specifications. We also propose a formal verification method for Statechart specifications by showing an equivalence relation between two Statechart specifications. This makes it possible to combine the advantages of a graphical language with the rigor of process algebra.
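Equivalence between two specifications translated into a process algebra is typically checked as bisimulation between their labeled transition systems. A naive greatest-fixed-point bisimilarity check over tiny hand-written LTSs (not ACSR terms or actual Statechart semantics) looks like:

```python
# Naive strong-bisimulation check between states of a small labeled
# transition system, by refining the full relation to its greatest fixed
# point. The LTSs below are invented examples.

def bisimilar(trans, s, t):
    """Return True iff states s and t of `trans` are strongly bisimilar."""
    states = set(trans)
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for p, q in list(rel):
            # every move of p must be matched by q into a related state,
            # and vice versa
            fwd = all(any(b == a and (p2, q2) in rel for b, q2 in trans[q])
                      for a, p2 in trans[p])
            bwd = all(any(a == b and (p2, q2) in rel for a, p2 in trans[p])
                      for b, q2 in trans[q])
            if not (fwd and bwd):
                rel.discard((p, q))
                changed = True
    return (s, t) in rel

# Two translations of the same 'on'/'off' loop, plus one that differs.
trans = {
    "A0": [("on", "A1")], "A1": [("off", "A0")],
    "B0": [("on", "B1")], "B1": [("off", "B0")],
    "C0": [("on", "C1")], "C1": [("on", "C0")],  # never does 'off'
}
print(bisimilar(trans, "A0", "B0"))  # True
print(bisimilar(trans, "A0", "C0"))  # False
```

This quadratic refinement is the textbook fixed-point characterization; production checkers use partition refinement for efficiency, and checking ACSR terms additionally involves its timed and prioritized semantics.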

Faceted Framework for Metadata Interoperability (메타데이터 상호운용성 확보를 위한 패싯 프레임워크 구축)

  • Lee, Seung-Min
    • Journal of the Korean Society for Information Management / v.27 no.2 / pp.75-94 / 2010
  • In the current information environment, metadata interoperability has become the predominant way of organizing and managing resources. However, current approaches to metadata interoperability focus on superficial mapping between the labels of metadata elements without considering the semantics of each element. This research applied facet analysis to address these difficulties in achieving metadata interoperability. By categorizing metadata elements according to their semantic and functional similarities, this research identified three types of facets: basic, conceptual, and relational. Through these facets, a faceted framework was constructed to mediate semantic, syntactic, and structural differences across heterogeneous metadata standards.
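The core mechanism can be sketched as mapping through facets rather than labels: each element is assigned a facet, and elements sharing a facet are treated as interoperable. The facet assignments below are simplified illustrations, not the paper's actual basic/conceptual/relational analysis:

```python
# Hedged sketch of facet-mediated metadata mapping: elements are matched
# via shared facets instead of label equality. Facet assignments here are
# simplified illustrations.

FACETS = {
    # (standard, element) -> facet
    ("DC", "creator"): "agent",
    ("MODS", "name"): "agent",
    ("DC", "date"): "event",
    ("MODS", "dateIssued"): "event",
}

def crosswalk(source, element, target):
    """Find target-standard elements sharing the source element's facet."""
    facet = FACETS.get((source, element))
    return [e for (std, e), f in FACETS.items() if std == target and f == facet]

print(crosswalk("DC", "creator", "MODS"))  # ['name']
```

Label-based mapping would miss `creator` ↔ `name` entirely; routing through the shared `agent` facet is what recovers the semantic correspondence.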