• Title/Summary/Keyword: Semantic Importance

A Study on an Algorithm for Computing Resource Importance in RDF Knowledge Bases (RDF 지식 베이스의 자원 중요도 계산 알고리즘에 대한 연구)

  • No, Sang-Gyu;Park, Hyeon-Jeong;Park, Jin-Su
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2007.05a
    • /
    • pp.123-137
    • /
    • 2007
  • The information space of the Semantic Web, composed of various resources, properties, and relationships, is more complex than that of the WWW, which consists only of documents and hyperlinks. Therefore, ranking methods for the Semantic Web should be modified to reflect this complexity of the information space. In this paper we propose a method of ranking query results from RDF (Resource Description Framework) knowledge bases. The ranking criterion is the importance of a resource, computed from the link structure of the RDF graph. Our method is expected to solve several problems in prior research, including the Tightly-Knit Community effect. We illustrate our method with examples and discuss directions for future research.

Multi-cue Integration for Automatic Annotation (자동 주석을 위한 멀티 큐 통합)

  • Shin, Seong-Yoon;Rhee, Yang-Won
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2010.07a
    • /
    • pp.151-152
    • /
    • 2010
  • WWW images are located in structured, networked documents, so the importance of a word can be indicated by its location and frequency. There are two patterns for multi-cue integration annotation. The multi-cue integration algorithm shows initial promise as an indicator of semantic keyphrases for Web images. Latent semantic automatic keyphrase extraction, which improves with the use of multiple cues, is expected to be preferable.

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available in the future, effectively and efficiently ranking search results will become even more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages; the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it. A page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking methods have played an essential role in the World Wide Web (WWW), and nowadays many people recognize their effectiveness and efficiency. On the other hand, since the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph. As a result, link-structure-based ranking methods seem to be highly applicable to ranking Semantic Web resources.
However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered as one huge class, i.e., a collection of Web pages, which has only a recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, in contrast, encompasses various kinds of classes and properties, and consequently ranking methods used in the WWW should be modified to reflect the complexity of its information space. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, corresponding to Kleinberg's authority score and hub score, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristics of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several kinds of Semantic Web systems to validate their technique and showed experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes should be high, or overall resources should be described in detail to a certain degree, for their algorithm to work properly.
Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important yet densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm which can solve the problems identified in the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by the previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world, and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect, and can further shed light on other limitations posed by the previous research. In addition, we propose two ways to incorporate datatype properties, which have not been employed previously even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had never been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research and which enabled us to simplify the calculation procedure.
Finally, we summarize our experimental results and discuss further research issues.
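The abstract above walks through both PageRank and Kleinberg's authority/hub (HITS) scoring. A minimal power-iteration sketch of the two on a toy directed graph follows; the graph, damping factor, and iteration count are invented for illustration, and this is not the paper's class-oriented algorithm:

```python
# Minimal power-iteration sketches of PageRank and HITS on a toy graph.
# Illustrative only; NOT the paper's class-oriented ranking algorithm.

# Directed graph as adjacency lists: node -> nodes it links to.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}

def pagerank(graph, damping=0.85, iters=50):
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in graph}
        for v, outs in graph.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:          # v passes its rank to pages it links to
                    new[w] += share
            else:                       # dangling node: spread rank uniformly
                for w in graph:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

def hits(graph, iters=50):
    auth = {v: 1.0 for v in graph}
    hub = {v: 1.0 for v in graph}
    for _ in range(iters):
        # authority score: sum of hub scores of pages linking to v
        auth = {v: sum(hub[u] for u in graph if v in graph[u]) for v in graph}
        norm = sum(a * a for a in auth.values()) ** 0.5
        auth = {v: a / norm for v, a in auth.items()}
        # hub score: sum of authority scores of pages v links to
        hub = {v: sum(auth[w] for w in graph[v]) for v in graph}
        norm = sum(h * h for h in hub.values()) ** 0.5
        hub = {v: h / norm for v, h in hub.items()}
    return auth, hub

pr = pagerank(graph)
auth, hub = hits(graph)
# "c" has the most in-links (from a, b, d), so it dominates both rankings
```

Both methods reward in-links, which is exactly why the common-resource problem noted above arises: a node can score highly merely by being linked often.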

A Digital Image Watermarking Using Region Segmentation

  • Park, Min-Chul;Han, Suk-Ki
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.1260-1263
    • /
    • 2002
  • This paper takes region segmentation in image processing and semantic importance in image analysis into consideration for digital image watermarking. The semantic importance of an object region, segmented by specific features, is determined according to the contents of the region. In this paper, face images are the targets of watermarking because of their increasing importance, frequency of use, and strong need for protection. A face region is detected and segmented as an object region, and encoded watermark information is embedded into the region. Experiments employing a masking and filtering method are carried out, and the results show the usefulness of the proposed method even under high compression and synthesis, as cases of copyright infringement.

  • PDF
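As a rough illustration of region-based embedding, the sketch below hides watermark bits only inside a segmented region of interest. Least-significant-bit replacement stands in for the paper's masking-and-filtering embedding, and the image, region, and bit string are invented:

```python
# Sketch: embed a watermark only inside a segmented region of interest
# (here a face bounding box). LSB replacement is a stand-in for the
# paper's masking-and-filtering embedding; all values are invented.
import numpy as np

def embed_in_region(image, bits, region):
    """image: 2-D uint8 array; bits: list of 0/1; region: (y0, y1, x0, x1)."""
    out = image.copy()
    y0, y1, x0, x1 = region
    flat = out[y0:y1, x0:x1].reshape(-1)          # copy of the region's pixels
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
    out[y0:y1, x0:x1] = flat.reshape(y1 - y0, x1 - x0)  # write region back
    return out

def extract_from_region(image, n_bits, region):
    y0, y1, x0, x1 = region
    return [int(b) for b in image[y0:y1, x0:x1].reshape(-1)[:n_bits] & 1]

img = np.full((8, 8), 200, dtype=np.uint8)        # flat gray stand-in image
mark = [1, 0, 1, 1, 0, 0, 1, 0]
wm = embed_in_region(img, mark, (2, 6, 2, 6))
assert extract_from_region(wm, len(mark), (2, 6, 2, 6)) == mark
```

Restricting the embedding to the segmented object region is what lets the watermark survive operations applied outside that region, per the idea in the abstract.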

Tagged Web Image Retrieval Re-ranking with Wikipedia-based Semantic Relatedness (위키피디아 기반의 의미 연관성을 이용한 태깅된 웹 이미지의 검색순위 조정)

  • Lee, Seong-Jae;Cho, Soo-Sun
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.11
    • /
    • pp.1491-1499
    • /
    • 2011
  • Nowadays, making good use of tags is a general tendency when users upload or search multimedia data such as images and videos on the Web. In this paper, we introduce an approach to calculate the semantic importance of tags and to re-rank tagged Web image retrieval results with them. Generally, most photo images stored on the Web carry many tags added according to users' subjective judgements rather than by their importance, so they decrease precision when tags are simply matched to a given query. Therefore, if we can select semantically important tags and employ them in image search, the retrieval results would be enhanced. In this paper, we propose a method to re-rank image retrieval results with the key tags that share the most semantic information with a query or other tags, based on Wikipedia-based semantic relatedness. Using the semantic relatedness calculated from the huge on-line encyclopedia Wikipedia, our experimental results show the superiority of our method in precision and recall.
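The idea of scoring tags by their relatedness to the query and to each other, then re-ranking images accordingly, can be sketched as below. The relatedness lookup table is a tiny hand-made stand-in for the paper's Wikipedia-based measure, and all tag and file names are hypothetical:

```python
# Sketch of re-ranking tagged images by tag importance. The relatedness
# table is a hand-made stand-in for a Wikipedia-based measure (symmetric,
# values in [0, 1]); all names are hypothetical.

REL = {frozenset(p): s for p, s in {
    ("apple", "fruit"): 0.9, ("apple", "pie"): 0.6,
    ("fruit", "pie"): 0.4, ("apple", "laptop"): 0.2,
    ("fruit", "laptop"): 0.1, ("pie", "laptop"): 0.05,
}.items()}

def relatedness(t1, t2):
    return 1.0 if t1 == t2 else REL.get(frozenset((t1, t2)), 0.0)

def tag_importance(tags, query):
    # A tag is important if it is related to the query and also to the
    # other tags of the same image (shared semantic context).
    return {t: relatedness(t, query) +
               sum(relatedness(t, u) for u in tags if u != t) / max(len(tags) - 1, 1)
            for t in tags}

def rerank(images, query):
    # Score each image by the average importance of its tags w.r.t. the query.
    def score(tags):
        imp = tag_importance(tags, query)
        return sum(imp.values()) / len(imp)
    return sorted(images, key=lambda item: score(item[1]), reverse=True)

images = [("img1.jpg", ["apple", "laptop"]),
          ("img2.jpg", ["apple", "fruit", "pie"])]
ranked = rerank(images, "fruit")   # img2's tags are semantically coherent
```

Simple tag matching would treat both images equally for the query "apple"; weighting by mutual relatedness promotes the image whose tags form a coherent topic.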

Error Concealment Based on Semantic Prioritization with Hardware-Based Face Tracking

  • Lee, Jae-Beom;Park, Ju-Hyun;Lee, Hyuk-Jae;Lee, Woo-Chan
    • ETRI Journal
    • /
    • v.26 no.6
    • /
    • pp.535-544
    • /
    • 2004
  • With video compression standards such as MPEG-4, a transmission error happens on a video-packet basis rather than on a macroblock basis. In this context, we propose a semantic error prioritization method that determines the size of a video packet based on the importance of its contents. The video packet length is kept short for an important area such as a facial area in order to reduce the possibility of error accumulation. To facilitate the semantic error prioritization, an efficient hardware algorithm for face tracking is proposed. The increase in hardware complexity is minimal because a motion estimation engine is efficiently re-used for face tracking. Experimental results demonstrate that the facial area is well protected with the proposed scheme.

  • PDF
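A toy sketch of the packetization policy described above: macroblocks in the face region get a smaller packet budget, so a single packet loss corrupts less of the important data. The bit budgets and macroblock sizes are invented, not the paper's values:

```python
# Sketch of semantic packet prioritization: important (face) macroblocks
# get shorter packets so one packet error corrupts less of them.
# Bit budgets below are invented, not the paper's values.

SHORT_PACKET_BITS = 256    # budget for face macroblocks
LONG_PACKET_BITS = 1024    # budget for background macroblocks

def packetize(macroblocks):
    """macroblocks: list of (bits, is_face).
    Returns a list of packets, each a list of macroblock indices."""
    packets, current, used, face_mode = [], [], 0, None
    for i, (bits, is_face) in enumerate(macroblocks):
        limit = SHORT_PACKET_BITS if is_face else LONG_PACKET_BITS
        # start a new packet when the budget would be exceeded or the
        # region type changes (face data never shares a packet with background)
        if current and (used + bits > limit or is_face != face_mode):
            packets.append(current)
            current, used = [], 0
        current.append(i)
        used += bits
        face_mode = is_face
    if current:
        packets.append(current)
    return packets

# 8 background, 4 face, 4 background macroblocks of 100 bits each
mbs = [(100, False)] * 8 + [(100, True)] * 4 + [(100, False)] * 4
pkts = packetize(mbs)      # face macroblocks split into two short packets
```

With these budgets the face region is split across two small packets while each background run fits in one large packet, which is the error-localization effect the abstract describes.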

SWCL Extension for Knowledge Representation of Piecewise linear Constraints on the Semantic Web (시맨틱 웹 환경에서의 부분선형 제약지식표현을 위한 SWCL의 확장)

  • Lee, Myungjin;Kim, Wooju;Kim, Hak-Jin
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.37 no.4
    • /
    • pp.19-35
    • /
    • 2012
  • The Semantic Web technology, which aims to let machines share, reuse, and process data stored in the Web environment, incessantly evolves to help human decision making; in particular, decision making based on data, or quantitative decision making. This trend drives researchers to make strenuous efforts to fill the gap between the current state of the technology and the terminus of this evolution. The Semantic Web Constraint Language (SWCL), together with SWRL, is one of these endeavors to achieve the goal. This paper focuses particularly on how to express the piecewise linear form in the context of SWCL. The importance of this ingredient is reinforced by the fact that any nonlinear expression can be approximated in piecewise linear form. This paper also shows how it works in the decision-making process through an example of an Internet shopping mall problem.
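The claim that any nonlinear expression can be approximated in piecewise linear form can be illustrated numerically. The sketch below builds a piecewise linear interpolant over fixed breakpoints; it is plain Python numerics, not SWCL syntax:

```python
# Sketch: approximate a nonlinear function by a piecewise linear one over
# fixed breakpoints. Plain numerics for illustration, not SWCL syntax.

def make_piecewise_linear(f, breakpoints):
    """Return a function that interpolates f linearly between breakpoints."""
    xs = sorted(breakpoints)
    ys = [f(x) for x in xs]
    def pl(x):
        if x <= xs[0]:
            return ys[0]
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            if x <= x1:
                t = (x - x0) / (x1 - x0)      # position within the segment
                return y0 + t * (y1 - y0)     # linear interpolation
        return ys[-1]
    return pl

square = lambda x: x * x
approx = make_piecewise_linear(square, [0, 1, 2, 3, 4])
# exact at breakpoints; between them it overestimates a convex function,
# and adding more breakpoints shrinks the gap
```

In a constraint language, each segment becomes one linear constraint plus selection variables, which is what makes this form tractable for solvers.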

Image Semantic Segmentation Using Improved ENet Network

  • Dong, Chaoxian
    • Journal of Information Processing Systems
    • /
    • v.17 no.5
    • /
    • pp.892-904
    • /
    • 2021
  • An image semantic segmentation model based on an improved ENet network is proposed in order to address the low accuracy of image semantic segmentation in complex environments. Firstly, pruning and convolution optimization operations are performed on the ENet network: the network structure is adjusted for better segmentation results by reducing the convolution operations in the decoder and proposing a bottleneck convolution structure. A squeeze-and-excitation (SE) module is then integrated into the optimized ENet network, improving segmentation accuracy for small-scale targets via automatic learning of the importance of each feature channel. Finally, the method was verified experimentally on a public dataset. It outperforms existing comparison methods in mean pixel accuracy (MPA) and mean intersection over union (MIOU) values while guaranteeing both segmentation accuracy and operational efficiency within a short running time.
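The SE module's channel reweighting can be sketched in plain NumPy: squeeze by global average pooling, excite through a small bottleneck, and rescale each channel by its learned importance. The weights below are random stand-ins for learned parameters, and the shapes are illustrative:

```python
# Plain-NumPy sketch of a squeeze-and-excitation (SE) block: learn a
# per-channel importance gate and rescale feature maps accordingly.
# Weights here are random stand-ins for learned parameters.
import numpy as np

def se_block(feature_map, w1, w2):
    """feature_map: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    # squeeze: global average pooling -> one descriptor per channel
    z = feature_map.mean(axis=(1, 2))              # (C,)
    # excitation: bottleneck FC -> ReLU -> FC -> sigmoid
    s = np.maximum(w1 @ z, 0.0)                    # (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))         # (C,), values in (0, 1)
    # scale: reweight each channel by its importance
    return feature_map * gate[:, None, None]

rng = np.random.default_rng(0)
C, r = 8, 4                                        # channels, reduction ratio
x = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
y = se_block(x, w1, w2)                            # same shape, channels rescaled
```

Because the gate is learned per channel, channels that carry small-target evidence can be amplified relative to the rest, which is the mechanism the abstract credits for the accuracy gain.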

Linear Precedence in Morphosyntactic and Semantic Processes in Korean Sentential Processing as Revealed by Event-related Potential

  • Kim, Choong-Myung
    • International Journal of Contents
    • /
    • v.10 no.4
    • /
    • pp.30-37
    • /
    • 2014
  • The current study was conducted to examine the temporal and spatial activation sequences related to morphosyntactic, semantic, and orthographic-lexical sentences, focusing on the morphological-orthographic and lexical-semantic deviation processes in Korean language processing. Event-related potentials (ERPs) recorded from 15 healthy students were used to explore the processing of head-final critical words in a sentential plausibility task. Specifically, it was examined whether the ERP pattern for orthographic-lexical violations shows linear precedence over other processes, or whether there is additivity across combined processing components. For the morphosyntactic violation, a fronto-central LAN followed by P600 was found, while the semantic violation elicited an N400, as expected. Activation of the P600 was distributed over the left frontal and central sites, while the N400 appeared even in frontal sites beyond the centro-parietal areas. Most importantly, the orthographic-lexical violation process, revealed by an earlier N2 with fronto-central activity, was shown to be a complex of morphological and semantic functions from the same critical word. The present study suggests that there is a linear precedence of the morphological deviation and its lexical-semantic processing, based on the immediate availability of lexical information, followed by sentential semantics. Finally, late syntactic integration processes were completed, showing different topographic activation in order of the importance of ongoing sentential information.

Applying the Schema Matching Method to XML Semantic Model of Steelbox-bridge's Structural Calculation Reports (강박스교 구조계산서 XML 시맨틱 모델의 스키마 매칭 기법 적용)

  • Yang Yeong-Ae;Kim Bong-Geun;Lee Sang-Ho
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2005.04a
    • /
    • pp.680-687
    • /
    • 2005
  • This study presents a schema matching technique which can be applied to the XML semantic model of structural calculation reports of steel-box bridges. The semantic model of structural calculation documents was developed by extracting optimized common elements from analyses of various existing structural calculation documents, and the standardized semantic model was schematized by using XML Schema. In addition, a similarity measure technique and a relaxation labeling technique were employed to develop the schema matching algorithm; the former takes into account the element categories and their features, while the latter considers the structural constraints in the semantic model. The standardized XML semantic model of steel-box bridge structural calculation documents, called the target schema, was compared with existing non-standardized structural calculation documents, called the primitive schema, by the developed schema matching algorithm. Some application examples show the importance of developing a standardized target schema for structural calculation documents, and the effectiveness and efficiency of the schema matching technique in examining the degree of document standardization in structural calculation reports.

  • PDF
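The similarity-measure step of schema matching can be sketched as below, pairing each element of a primitive schema with its most similar target-schema element by token overlap. The element names are hypothetical, and the relaxation-labeling refinement that enforces structural constraints is omitted:

```python
# Sketch of the similarity-measure step of schema matching: match each
# primitive-schema element to its most similar target-schema element by
# token overlap. Element names are hypothetical; the relaxation-labeling
# step that enforces structural constraints is omitted.

def token_similarity(a, b):
    # Jaccard similarity over underscore-separated name tokens
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def match(primitive, target):
    # greedy best match per primitive element
    return {p: max(target, key=lambda t: token_similarity(p, t))
            for p in primitive}

target = ["girder_section", "load_case", "stress_check"]
primitive = ["section_of_girder", "loading_case", "check_of_stress"]
m = match(primitive, target)
```

A name-only measure like this is exactly what the structural relaxation-labeling pass then corrects, using the positions of elements within the schema tree to resolve ambiguous matches.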