• Title/Summary/Keyword: Candidate Clustering


Agglomeration Economies and Intra-metropolitan Location of Firms: A Spatial Analysis on Chicago and Seoul (집적경제와 도시내 기업입지에 대한 공간분석: 서울과 시카고를 대상으로)

  • Jungyul Sohn
    • Journal of the Korean Geographical Society / v.36 no.5 / pp.561-577 / 2001
  • Urban spatial structure is closely related to the spatial distribution of urban economic activities. The spatial distribution pattern is no more than an aggregated expression of the location and/or relocation behavior of individual firms and establishments. In this respect, it is important to identify and examine the factors that affect the spatial behavior of individual firms for a more comprehensive understanding of urban space. Agglomeration economies are one of the most prominent urban economic phenomena in the modern metropolitan area. Most firms in an urban space seek external economies through the spatial clustering of their activities. Agglomeration economies feature prominently in the analysis of urban economic structure across urban areas. While agglomeration economies between cities are examined at the macro scale of analysis, such economies within a given city call for a more micro geographical scale. Although there has been a substantial body of research on agglomeration economies, relatively few studies take an intra-urban perspective. This paper explores agglomeration economies at the micro scale and tries to reveal their spatial realization within and between sectors. Three sectors are considered in the analysis: manufacturing, retail, and service. The model is based on a simultaneous equation system combined with spatially weighted variables and is estimated with the KRP estimators.

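The abstract above hinges on spatially weighted variables entering a simultaneous equation system across the manufacturing, retail, and service sectors. The NumPy sketch below is only a rough illustration of how such a spatially weighted regressor can be built (a row-standardized inverse-distance weight matrix applied to sectoral employment); the coordinates, employment figures, and distance-decay form are assumptions for illustration, not the paper's actual data or specification.

```python
import numpy as np

# Hypothetical zone centroids and manufacturing employment (illustrative data only).
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 50.0, size=(100, 2))            # centroids in km
manuf_emp = rng.poisson(200, size=100).astype(float)       # manufacturing jobs per zone

# Pairwise distances and an inverse-distance spatial weight matrix
# (zero diagonal, row-standardized).
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
W = np.zeros_like(d)
off_diag = d > 0
W[off_diag] = 1.0 / d[off_diag]
W /= W.sum(axis=1, keepdims=True)

# Spatially weighted (lagged) variable: each zone's access to surrounding
# manufacturing employment, the kind of regressor that would enter a
# simultaneous equation system to capture agglomeration effects.
spatial_lag_manuf = W @ manuf_emp
print(spatial_lag_manuf[:5])
```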

Fire Detection Approach using Robust Moving-Region Detection and Effective Texture Features of Fire (강인한 움직임 영역 검출과 화재의 효과적인 텍스처 특징을 이용한 화재 감지 방법)

  • Nguyen, Truc Kim Thi;Kang, Myeongsu;Kim, Cheol-Hong;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information / v.18 no.6 / pp.21-28 / 2013
  • This paper proposes an effective fire detection approach that combines multiple heterogeneous algorithms: moving-region detection using grey-level histograms, color segmentation using fuzzy c-means clustering (FCM), feature extraction using a grey-level co-occurrence matrix (GLCM), and fire classification using a support vector machine (SVM). The proposed approach determines optimal threshold values based on grey-level histograms in order to detect moving regions, and then performs color segmentation in the CIE LAB color space by applying FCM. These steps help to specify candidate regions of fire. We then extract features of fire using the GLCM, and these features are used as inputs to the SVM to classify fire or non-fire. We evaluate the proposed approach by comparing it with two state-of-the-art fire detection algorithms in terms of the fire detection rate (percentage of true positives, PTP) and the false fire detection rate (percentage of true negatives, PTN). Experimental results indicate that the proposed approach outperforms conventional fire detection algorithms, yielding 97.94% for PTP and 4.63% for PTN.
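
The abstract above chains moving-region detection, FCM color segmentation, GLCM texture features, and SVM classification. The sketch below illustrates only the last two stages with scikit-image and scikit-learn: GLCM texture features computed on candidate-region patches and fed to an SVM. The patch data, labels, and GLCM settings (distances, angles, properties) are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_patch):
    """Texture features of a candidate fire region (8-bit grayscale patch)."""
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical training data: grayscale patches of candidate regions,
# labeled 1 = fire, 0 = non-fire (random stand-ins here).
rng = np.random.default_rng(1)
patches = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)

X = np.array([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)       # fire / non-fire classifier
print(clf.predict(X[:3]))
```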

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents could also benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task, since it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts; in other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques. Thus, keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document, and as a result, keyword extraction is limited to terms that appear in the document. Therefore, keyword extraction cannot generate implicit keywords that are not included in a document. According to Turney's experimental results, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of author-assigned keywords do not appear in the article and therefore cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating the keywords that have high similarity scores. Two keyword generation systems were implemented using IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to that of Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents grows, we expect that the IVSM approach proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
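
The five-step IVSM procedure above amounts to scoring each candidate keyword set by the cosine similarity between its weighted term vector and the target document's term-frequency vector. The sketch below is a minimal Python/NumPy illustration of that idea; the toy keyword-set weights and the tokenizer are assumptions, not the authors' implementation.

```python
import re
import numpy as np

# Step 1 (assumed toy data): weighted term vectors for each candidate keyword set.
keyword_sets = {
    "clustering": {"cluster": 0.9, "segmentation": 0.4, "k-means": 0.7},
    "information retrieval": {"query": 0.8, "document": 0.6, "ranking": 0.5},
}

def cosine(doc_tf, kw_weights):
    """Steps 3-4: cosine similarity between document term frequencies and a keyword set."""
    terms = sorted(set(doc_tf) | set(kw_weights))
    a = np.array([doc_tf.get(t, 0.0) for t in terms])
    b = np.array([kw_weights.get(t, 0.0) for t in terms])
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Step 2: preprocess/parse a target document into term frequencies.
doc = "We cluster documents and rank each document for a user query."
tokens = re.findall(r"[a-z\-]+", doc.lower())
doc_tf = {t: tokens.count(t) / len(tokens) for t in set(tokens)}

# Step 5: keep the keyword sets with the highest similarity scores.
scores = {k: cosine(doc_tf, w) for k, w in keyword_sets.items()}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```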