• Title/Summary/Keyword: Structure Similarity


Fractal Analysis of Urban Morphology Considering Distributed Situation of Buildings (건물분포를 고려한 도시형태의 프랙털(Fractal) 해석)

  • Moon, Tae-Heon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.8 no.3
    • /
    • pp.1-10
    • /
    • 2005
  • The purpose of this paper is to conduct an experimental measurement and analysis of city morphology. Fractal theory, an effective tool for evaluating the self-similarity and complexity of objects, was applied. For the comparative analysis of fractalities and computational verification, two totally different cities in Japan were selected: Kitakyushu City, a big and fully developed city, and Jinguu Machi, almost all of whose area is covered by agricultural land use. After converting vector data to raster data within GIS, fractal dimensions were calculated for two cases in Kitakyushu City and one case in Jinguu Machi. The calculation showed that the two parts of Kitakyushu City were already fractal, whereas it was difficult to find fractality in Jinguu Machi. In conclusion, fractal analysis proved to be a useful tool for estimating the shape of cities while reflecting their internal spatial structure, that is, their self-similarity and complexity.

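The box-counting procedure underlying such fractal-dimension measurements can be sketched as follows. This is a minimal illustrative sketch, not the authors' GIS pipeline: it assumes the rasterized city is a binary NumPy array in which 1 marks a built-up cell, and estimates the dimension as the slope of the log-log regression of occupied-box counts against box scale.

```python
import numpy as np

def box_counting_dimension(grid, box_sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary raster.

    grid: 2-D array of 0/1 values (1 = built-up cell).
    Returns the slope of log N(s) versus log(1/s), where N(s) is the
    number of s x s boxes containing at least one occupied cell.
    """
    counts = []
    for s in box_sizes:
        h, w = grid.shape
        # pad with zeros so the raster tiles evenly into s x s boxes
        g = np.pad(grid, ((0, (-h) % s), (0, (-w) % s)))
        # count boxes that contain at least one occupied cell
        boxes = g.reshape(g.shape[0] // s, s, g.shape[1] // s, s)
        counts.append((boxes.sum(axis=(1, 3)) > 0).sum())
    # the fractal dimension is the slope of the log-log regression line
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)),
                          np.log(counts), 1)
    return slope
```

A fully built-up (space-filling) raster yields a dimension near 2, while sparse or linear settlement patterns yield lower values, which matches the paper's contrast between Kitakyushu City and Jinguu Machi.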

Image Quality Assessment Considering both Computing Speed and Robustness to Distortions (계산 속도와 왜곡 강인성을 동시 고려한 이미지 품질 평가)

  • Kim, Suk-Won;Hong, Seongwoo;Jin, Jeong-Chan;Kim, Young-Jin
    • Journal of KIISE
    • /
    • v.44 no.9
    • /
    • pp.992-1004
    • /
    • 2017
  • To assess image quality accurately, an image quality assessment (IQA) metric must reflect the human visual system (HVS) properly. In other words, the structure, color, and contrast ratio of the image should be evaluated in consideration of various factors. In addition, as mobile embedded devices such as smartphones become popular, fast computing speed is important. In this paper, the proposed IQA metric combines color similarity, gradient similarity, and phase similarity synergistically to satisfy the HVS, and is designed with optimized pooling and quantization for fast computation. The proposed IQA metric is compared against 13 existing methods using 4 kinds of evaluation methods. The experimental results show that the proposed IQA metric ranks first on 3 of the evaluation methods and second on the remaining one, next to VSI, the most remarkable existing IQA metric. Its computing speed is on average about 20% faster than VSI's. In addition, we find that the proposed IQA metric correlates more strongly with the HVS than existing IQA metrics.
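One ingredient of such composite metrics, gradient similarity, can be sketched as below. This is an illustrative sketch of the standard gradient-magnitude similarity kernel used in metrics of this family, not the authors' exact formulation; the constant `c` is an assumed stabilizing parameter.

```python
import numpy as np

def gradient_similarity(img1, img2, c=160.0):
    """Per-pixel gradient-magnitude similarity between two grayscale images.

    Returns a map in (0, 1]: 1 where the local gradients agree exactly,
    approaching 0 as they diverge.
    """
    def grad_mag(img):
        # np.gradient returns derivatives along axis 0 (y) and axis 1 (x)
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)

    g1, g2 = grad_mag(img1), grad_mag(img2)
    # classic similarity kernel; c keeps the ratio stable near zero gradients
    return (2.0 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
```

A composite metric would combine this map with analogous color- and phase-similarity maps, then pool the result into a single score.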

Enhancing Document Clustering Method using Synonym of Cluster Topic and Similarity (군집 주제의 유의어와 유사도를 이용한 문서군집 향상 방법)

  • Park, Sun;Kim, Kyung-Jun;Lee, Jin-Seok;Lee, Seong-Ro
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.5
    • /
    • pp.30-38
    • /
    • 2011
  • This paper proposes a new method for enhancing document clustering using synonyms of cluster topics and similarity. The proposed method represents the inherent structure of the document cluster set well by selecting cluster topic terms based on semantic features derived by NMF. It mitigates the "bag of words" problem by expanding the cluster topic terms with synonyms from WordNet. It also improves the quality of document clustering by using the cosine similarity between the expanded cluster topic terms and the documents to assign each document to the appropriate cluster. The experimental results demonstrate that the proposed method achieves better performance than other document clustering methods.
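The cosine-similarity step between expanded topic terms and documents can be sketched with plain term-frequency vectors. This is a minimal sketch of the measure itself, assuming simple whitespace tokenization; the paper's NMF term selection and WordNet expansion are not reproduced here.

```python
import math
from collections import Counter

def cosine_similarity(tokens_a, tokens_b):
    """Cosine similarity between two token lists using raw
    term-frequency vectors; returns a value in [0, 1]."""
    va, vb = Counter(tokens_a), Counter(tokens_b)
    # dot product over the terms the two vectors share
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0
```

In the paper's setting, one vector would hold the synonym-expanded cluster topic terms and the other a candidate document; each document is assigned to the cluster whose expanded topic it is most similar to.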

A Dispersion Mean Algorithm based on Similarity Measure for Evaluation of Port Competitiveness (항만 경쟁력 평가를 위한 유사도 기반의 이산형 평균 알고리즘)

  • Chw, Bong-Sung;Lee, Cheol-Yeong
    • Journal of Navigation and Port Research
    • /
    • v.28 no.3
    • /
    • pp.185-191
    • /
    • 2004
  • The mean and clustering are important methods in data mining, which is now widely applied to various multi-attribute problems. Feature weighting and feature selection are important in these methods because features may differ in importance, and such differences need to be considered when mining multi-attribute data. In addition, the arithmetic mean is inadequate for producing a well-fitted result for an evaluation structure whose attributes are weighted and ranked, and it is hard to capture the characteristics of each user group. In this paper, we propose a dispersion mean algorithm for evaluating similarity measures based on geometrical figures, and apply it to means classified by user group. One of the key issues to be considered in evaluating the similarity measure is how to achieve objectivity, so that item rankings do not change during the evaluation process.

Efficient Hardware Architecture for Fast Image Similarity Calculation (고속 영상 유사도 분석을 위한 효율적 하드웨어 구조)

  • Kwon, Soon;Lee, Chung-Hee;Lee, Jong-Hun;Moon, Byung-In;Lee, Yong-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.48 no.4
    • /
    • pp.6-13
    • /
    • 2011
  • Due to its robustness to illumination change, normalized cross-correlation (NCC) based similarity measurement is widely used in many machine vision applications. However, its inefficient computation structure is not adequate for real-time embedded vision systems. In this paper, we present an efficient hardware architecture based on NCC for fast image similarity measurement. The proposed architecture simplifies the window-sum process of the NCC using the integral image. To relieve the overhead of constructing the integral image, it is built at the same time that the pixel sequence is input. The proposed segmented integral image method also reduces the buffer size needed to store integral image data.
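The integral-image trick that turns each window sum into four lookups can be sketched in software as follows. This is an algorithmic sketch of the standard summed-area table, assuming an integer grayscale image, not the paper's hardware pipeline or its segmented-buffer scheme.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum(img[:y, :x])."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def window_sum(ii, y, x, h, w):
    """Sum over the h x w window with top-left corner (y, x), in O(1)
    via four table lookups instead of h*w additions."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
```

NCC needs window sums of the image, the template-aligned products, and the squared pixels; replacing each with O(1) lookups is what makes the hardware architecture fast.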

Structural Alignment: Conceptual Implications and Limitations (구조적 정렬: 개념적 시사점과 한계)

  • Lee Tae-Yeon
    • Korean Journal of Cognitive Science
    • /
    • v.17 no.1
    • /
    • pp.53-74
    • /
    • 2006
  • Similarity has been considered one of the basic concepts of cognitive psychology, useful for explaining cognitive structure and process. MDS models (Shepard, 1964; Nosofsky, 1991) and the Contrast model (Tversky, 1977) were proposed as early models of the similarity comparison process. However, there have been many theoretical doubts about the conceptual validity of similarity, prompted by empirical findings that the early models could not explain. Goldstone (1994) assumed that similarity could be defined by alignment processes, and suggested structural alignment as a prospective alternative for resolving the conceptual controversies so far. In this study, the basic assumptions and algorithms of the MDS models (Shepard, 1964; Nosofsky, 1991) and the Contrast model (Tversky, 1977) are described briefly, and some theoretical limitations, such as the arbitrariness of selective attention and correlated structures, are discussed as well. The conceptual characteristics and algorithms of SIAM (Goldstone, 1994) are described, and its applications to cognitive psychology areas such as categorization, conceptual combination, and analogical reasoning are reviewed. Finally, some theoretical limitations related to data-driven processing and alternative processing, and possible directions for structural alignment, are discussed.


Implementation of an Efficient Requirements Analysis supporting System using Similarity Measure Techniques (유사도 측정 기법을 이용한 효율적인 요구 분석 지원 시스템의 구현)

  • Kim, Hark-Soo;Ko, Young-Joong;Park, Soo-Yong;Seo, Jung-Yun
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.1
    • /
    • pp.13-23
    • /
    • 2000
  • As software becomes more complicated and large-scaled, users' demands become more varied and their expectations of software products rise. Therefore it is very important that a software engineer analyze users' requirements precisely and apply them effectively during development. This paper presents a requirements analysis system that effectively reduces and revises errors in requirements specification analysis. By measuring the similarity among requirements documents and sentences, the system assists users in analyzing the dependencies among requirements specifications and in finding traceability, redundancy, inconsistency, and incompleteness among requirements sentences. It also extracts sentences that contain ambiguous words. The indexing method for the similarity measurement combines a sliding-window model and a dependency-structure model, so that each model can compensate for the other's weaknesses. This paper verifies the efficiency of the similarity measure techniques through experiments and presents a process for requirements specification analysis using the implemented system.

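The sliding-window side of such sentence-similarity indexing can be sketched as word n-gram overlap. This is a minimal sketch under assumed whitespace tokenization and a Dice coefficient; the paper's actual indexing combines this kind of window model with a dependency-structure model, which is not reproduced here.

```python
def ngram_similarity(sent_a, sent_b, n=2):
    """Dice coefficient over word n-grams: slides a window of n words
    over each sentence and compares the resulting n-gram sets."""
    def ngrams(tokens, n):
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    ga = ngrams(sent_a.split(), n)
    gb = ngrams(sent_b.split(), n)
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))
```

High scores between two requirement sentences flag likely redundancy or dependency; near-duplicates with one differing word surface as partial overlaps worth reviewing for inconsistency.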

Computational Approaches for Structural and Functional Genomics

  • Brenner, Steven-E.
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2000.11a
    • /
    • pp.17-20
    • /
    • 2000
  • Structural genomics aims to provide a good experimental structure or computational model of every tractable protein in a complete genome. Underlying this goal is the immense value of protein structure, especially in permitting recognition of distant evolutionary relationships for proteins whose sequence analysis has failed to find any significant homolog. A considerable fraction of the genes in all sequenced genomes has no known function, and structure determination provides a direct means of revealing homology that may be used to infer their putative molecular function. The solved structures will be similarly useful for elucidating the biochemical or biophysical role of proteins that have previously been ascribed only phenotypic functions. More generally, knowledge of an increasingly complete repertoire of protein structures will aid structure prediction methods, improve understanding of protein structure, and ultimately lend insight into molecular interactions and pathways. We use computational methods to select families whose structures cannot be predicted and which are likely to be amenable to experimental characterization. The methods employed include modern sequence analysis and clustering algorithms. A critical component is consultation of the PRESAGE database for structural genomics, which records the community's experimental work underway and computational predictions. The protein families are ranked according to several criteria, including taxonomic diversity and known functional information. Individual proteins, often homologs from hyperthermophiles, are selected from these families as targets for structure determination. The solved structures are examined for structural similarity to other proteins of known structure. Homologous proteins in sequence databases are computationally modeled, to provide a resource of protein structure models complementing the experimentally solved protein structures.


A Study on the Database Structure for Utilizing Classical Literature Knowledge (고문헌 지식활용을 위한 DB구조에 관한 고찰)

  • Woo, Dong-Hyun;Kim, Ki-Wook;Lee, Byung-Wook
    • The Journal of Korean Medical History
    • /
    • v.33 no.2
    • /
    • pp.89-104
    • /
    • 2020
  • The purpose of this research is to build a database structure that can be useful for evidence-based medical practice, by structuring the oriental-medicine knowledge contained in classical literature in a form that can exploit new information technologies. As a method, "database" is used as a keyword to search published studies in the field of oriental medicine concerning classical literature knowledge, and studies describing the contents of the data structure are identified and analyzed. In conclusion, the original-text DB for preserving the original texts and presenting the supporting passages should include 'Contents Text', 'Tree Structure', 'Herbal Structure', 'Medicine Manufacture', and 'Disease Structure' tables. In order to search, calculate, and automatically extract expressions written in the original text of the old literature, the tool DB should include 'Unit List', 'Capacity Notation List', 'CUI', 'LUI', and 'SUI' tables. In addition, in order to manage integrated knowledge such as herbal, medicine, acupuncture, disease, and literature knowledge, and to implement search functions such as comparison of similarity of control composition, the knowledge DB must contain 'dose-controlled medicine name', 'dose-controlled medicine composition', 'relational knowledge', 'knowledge structure', and 'computational knowledge' tables.

Extracting Maximal Similar Paths between Two XML Documents using Sequential Pattern Mining (순차 패턴 마이닝을 사용한 두 XML 문서간 최대 유사 경로 추출)

  • 이정원;박승수
    • Journal of KIISE:Databases
    • /
    • v.31 no.5
    • /
    • pp.553-566
    • /
    • 2004
  • Some of the current main research areas involving techniques related to XML are storing XML documents, query optimization, and indexing. These efforts often concern sets of documents with varied structures that do not share a common structure such as the same DTD or XML Schema. In that case, it is essential to analyze the structural similarities and differences among the documents. For example, when documents from the Web or an EDMS (Electronic Document Management System) must be merged or classified, finding their common structure is very important for handling them. In this paper, we adapted sequential pattern mining algorithms (1) to extract maximal similar paths between two XML documents. Experiments with XML documents show that our adapted sequential pattern mining algorithms can exactly find common structures and the maximal similar paths between them. In analyzing the experimental results, similarity metrics based on maximal similar paths can exactly classify the types of XML documents.
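The idea of shared structural paths can be illustrated directly on element trees. The sketch below is a simplified stand-in for the mined result, assuming only tag names matter: it enumerates root-to-node tag paths of two documents and keeps the shared paths that are not prefixes of longer shared paths, rather than running an actual sequential pattern mining algorithm.

```python
import xml.etree.ElementTree as ET

def tag_paths(xml_text):
    """All root-to-node tag paths of an XML document, as tuples of tags."""
    def walk(elem, prefix):
        path = prefix + (elem.tag,)
        yield path
        for child in elem:
            yield from walk(child, path)
    return set(walk(ET.fromstring(xml_text), ()))

def maximal_similar_paths(xml_a, xml_b):
    """Shared tag paths that are not a proper prefix of a longer shared
    path -- a simplified notion of 'maximal similar paths'."""
    shared = tag_paths(xml_a) & tag_paths(xml_b)
    return {p for p in shared
            if not any(q != p and q[:len(p)] == p for q in shared)}
```

Two documents with the same DTD share all their paths; structurally unrelated documents share few or none, which is why path-based similarity metrics can separate document types.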