• Title/Summary/Keyword: information mapping


k-Nearest Neighbor Query Processing using Approximate Indexing in Road Network Databases (도로 네트워크 데이타베이스에서 근사 색인을 이용한 k-최근접 질의 처리)

  • Lee, Sang-Chul;Kim, Sang-Wook
    • Journal of KIISE:Databases / v.35 no.5 / pp.447-458 / 2008
  • In this paper, we address an efficient processing scheme for k-nearest neighbor queries that retrieve k static objects in road network databases. Existing methods cannot expect a query processing speed-up from index structures in road network databases, since it is impossible to build an index on the network distance, which does not satisfy the triangular inequality essential for index creation and is only defined on a totally ordered set. Thus, these previous methods suffer from serious performance degradation in query processing. Another method that uses pre-computed network distances suffers from a serious storage overhead to maintain the huge number of pre-computed distances. To solve these performance and storage problems at the same time, this paper proposes a novel approach that creates an index for moving objects by approximating their network distances and efficiently processes k-nearest neighbor queries by means of this approximate index. For this approach, we propose a systematic way of mapping each moving object on a road network to a corresponding absolute position in an m-dimensional space. To meet the triangular inequality, this paper proposes a new notion of average network distance and uses FastMap to map moving objects to their corresponding points in the m-dimensional space. We then present an approximate indexing algorithm that builds an R*-tree, a multidimensional index, on the m-dimensional points of the moving objects. The proposed scheme provides a query processing algorithm that efficiently evaluates k-nearest neighbor queries by finding the k nearest points (i.e., the k nearest moving objects) in the m-dimensional index. Finally, a variety of extensive experiments, especially on real-life road network databases, verifies the performance enhancement of the proposed approach.
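For illustration, a minimal Python sketch of the filter-and-refine idea the abstract describes: objects are embedded into an m-dimensional space, candidates are fetched from a multidimensional index on the embedded points, and the candidates are re-ranked by the exact network distance. A KD-tree stands in for the paper's R*-tree, and embed() and network_distance() are hypothetical callables, not the paper's API.

```python
# Illustrative filter-and-refine k-NN over an embedded road network.
# A KD-tree stands in for the R*-tree of the paper; embed() and
# network_distance() are hypothetical placeholders.
from scipy.spatial import cKDTree
import numpy as np

def knn_filter_and_refine(query_obj, objects, embed, network_distance, k, candidate_factor=4):
    # Map every object (and the query) into the m-dimensional space.
    points = np.array([embed(o) for o in objects])
    tree = cKDTree(points)                      # multidimensional index on embedded points
    q = np.array(embed(query_obj))

    # Filter: fetch more candidates than k, since the embedding is approximate.
    n_cand = min(len(objects), candidate_factor * k)
    _, idx = tree.query(q, k=n_cand)

    # Refine: re-rank the candidates by the exact network distance.
    cand = [(network_distance(query_obj, objects[i]), objects[i]) for i in np.atleast_1d(idx)]
    cand.sort(key=lambda pair: pair[0])
    return [obj for _, obj in cand[:k]]
```

Because the embedding only approximates network distances, the candidate set is deliberately taken larger than k before the refinement step.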

An Experimental Study on the Fashion Merchandising System - With Special Reference to the Life-style of Consumers and the Marketing Strategy of the Fashion Industry - (패션 머천다이징 시스템 개발에 관한 실증적 연구 - 라이프스타일과 패션 의 마케팅 전략을 중심으로-)

  • 이호정
    • Journal of the Korean Society of Costume / v.20 / pp.151-167 / 1993
  • The purpose of this study is to systematize the theory of the fashion marketing and merchandising system as well as the marketing strategy based on the related variables. Furthermore, this study deals with developing a marketing strategy for the relation between consumers and industry. The conclusions of the research can be outlined as follows: 1. In order to investigate how the life-style of consumers affects their sense of fashion, awareness of brands, and decision-making process of purchase, the life-style of women consumers is classified into 15 types. (1) According to the different life-style types, an important difference is found in the consumers' sense of clothes, their unique image of outfit and their own favorite image of womanliness. (2) The consumers' awareness of a particular brand has a reasonable relationship with their brand preference and possession of the brands. (3) There is an important discrimination according to the life-style types in brand awareness, preference and possession of brands. (4) The consumers of each life-style type show noticeable differences in the decision-making process of purchase, including the motive of purchase, the source of information, the cause of purchase intention, price, the frequency of purchase and the degree of satisfaction with purchased goods. 2. The merchandising system and the market positioning among the fashion industry are compared and analyzed in the following terms: (1-1) For the purpose of establishing the target market strategy, the industry uses unreasonable methods to analyze the life-style of the target customers and the real customers (36%), and the aging phenomenon of brands is remarkable: as many as 37% of brands show an age gap of over 5 years. (1-2) The price setting process depends highly on the cost-plus approach. (1-3) In color planning, too many colors are used in every season (the average number is 22.3) and the investigation of the consumers' favorite colors is neglected. (1-4) The manufacturers of successful brands are much more likely to employ textile designers and allow them to develop various fabrications. (1-5) The regular rate of sales in each season is extremely low (56.04%): the rate of the successful brands is relatively high at 65%, but that of the unsuccessful is as low as 51%. (1-6) 47% of brands reveal a designer-oriented fashion merchandising system; the successful brands, on the other hand, show a high rate of merchandiser-oriented systems. (2) Since the brand positioning is highly centered on each brand image, styles and target age, new data are presented in this study for new market development. (3) To set up the target market, the mapping of images between the differentiated market and the consumers is suggested according to the market positioning of industry and the 15 types of life-styles of consumers.


Skeleton Code Generation for Transforming an XML Document with DTD using Metadata Interface (메타데이터 인터페이스를 이용한 DTD 기반 XML 문서 변환기의 골격 원시 코드 생성)

  • Choe Gui-Ja;Nam Young-Kwang
    • The KIPS Transactions:PartD / v.13D no.4 s.107 / pp.549-556 / 2006
  • In this paper, we propose a system for generating skeleton programs for directly transforming an XML document into another document whose structure is defined by the target DTD, within a GUI environment. With the generated code, users can easily update or insert their own code into the program so that they can convert the document the way they want and can connect it with other classes or library files. Since most of the currently available code generation systems or methods for transforming XML documents use XSLT or XQuery, it is very difficult or impossible for users to manipulate the source code for further updates or refinements. Since the code generated in this paper is organized along the XPaths of the target DTD, the resulting code is quite readable. The code generation procedure is simple: once the user maps the related elements, represented as trees in the GUI interface, the source document is transformed into the target document and its corresponding Java source program is generated, where the DTD is given or is automatically extracted from the XML document by parsing it. The mapping is classified into 1:1, 1:N, and N:1 according to the structure and semantics of the elements of the DTD. The functions for changing the structure of elements designated by the user are amalgamated into the metadata interface. A real-world example of transforming articles written in an XML file into a bibliographical XML document is shown together with the transformed result and its code.
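As a rough illustration of the mapping-driven transformation the abstract describes (the paper itself generates Java skeleton code from a GUI mapping), here is a small Python sketch using only the standard library; the MAPPING table, element names, and sample document are hypothetical.

```python
# Toy illustration of a mapping-driven XML transformation (not the paper's
# generated Java skeleton): source XPaths are mapped 1:1 onto target elements.
import xml.etree.ElementTree as ET

# Hypothetical mapping from source paths to target element names.
MAPPING = {
    "./article/title":  "Title",
    "./article/author": "Creator",
    "./article/year":   "Date",
}

def transform(source_xml: str) -> ET.Element:
    src = ET.fromstring(source_xml)
    dst = ET.Element("Record")                 # root of the target document
    for src_path, dst_tag in MAPPING.items():
        for node in src.findall(src_path):     # 1:1 copy of mapped elements
            ET.SubElement(dst, dst_tag).text = node.text
    return dst

doc = "<root><article><title>T-MERGE</title><author>Kim</author><year>2006</year></article></root>"
print(ET.tostring(transform(doc), encoding="unicode"))
```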

A Study on a Service Ontology Design Scheme Using UML and OCL (UML 및 OCL을 이용한 서비스 온톨로지 설계 방안에 관한 연구)

  • Lee Yun-Su;Chung In-Jeoung
    • The KIPS Transactions:PartD / v.12D no.4 s.100 / pp.627-636 / 2005
  • The intelligent web service has been proposed for the automatic discovery, invocation, composition, inter-operation, execution monitoring and recovery of web services through Semantic Web and agent technology. To accomplish this intelligent web service, an ontology is a necessity so that the knowledge can be reasoned over and processed by the computer. However, creating a service ontology for the intelligent web service has two problems: it consumes a great deal of time and cost, depending on the heuristics of the service developer, and it is hard to achieve a complete mapping between the service and the service ontology. Moreover, the markup language used to describe the service ontology is currently hard for service developers to learn in a short time. This paper proposes an efficient way of designing and creating the service ontology using the MDA methodology. The proposed solution reuses the created model by designing and constructing the web service model with UML, based on MDA. After the platform-independent web service model is converted to a model dependent on OWL-S, a service ontology description language, it is converted into an OWL-S service ontology using XMI. The proposed solution has two benefits: service developers can easily construct the service ontology, and both the service and the service ontology can be created from one model. Moreover, it is effective in reducing time and cost, since the service ontology is created automatically from a model, and it copes flexibly with changes in the outer environment such as a platform change. This paper gives an example to show the validity of designing the web service model and creating the service ontology, and verifies whether the created service ontology is valid.
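A schematic sketch, under loose assumptions, of the final step the abstract describes: serializing a simple platform-independent service model into an OWL-S-like skeleton. The MODEL dictionary, the chosen properties, and the output fragment are illustrative only; they do not reproduce the paper's UML/XMI pipeline, nor do they constitute a conforming OWL-S document.

```python
# Schematic sketch only: emit an OWL-S-like service skeleton from a tiny
# platform-independent model. The model dict and the emitted fragment are
# illustrative, not the paper's XMI-based conversion.
MODEL = {"service": "BookFinder", "inputs": ["BookName"], "outputs": ["BookInfo"]}

def to_owls_skeleton(model: dict) -> str:
    name = model["service"]
    lines = [f'<service:Service rdf:ID="{name}Service">',
             f'  <service:presents rdf:resource="#{name}Profile"/>',
             f'  <service:describedBy rdf:resource="#{name}Process"/>',
             '</service:Service>',
             f'<profile:Profile rdf:ID="{name}Profile">']
    lines += [f'  <profile:hasInput rdf:resource="#{p}"/>' for p in model["inputs"]]
    lines += [f'  <profile:hasOutput rdf:resource="#{p}"/>' for p in model["outputs"]]
    lines.append('</profile:Profile>')
    return "\n".join(lines)

print(to_owls_skeleton(MODEL))
```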

Automatic Merging of Distributed Topic Maps based on T-MERGE Operator (T-MERGE 연산자에 기반한 분산 토픽맵의 자동 통합)

  • Kim Jung-Min;Shin Hyo-Pil;Kim Hyoung-Joo
    • Journal of KIISE:Software and Applications / v.33 no.9 / pp.787-801 / 2006
  • Ontology merging describes the process of integrating two ontologies into a new ontology. How this is best done is a subject of ongoing research in the Semantic Web, data integration, knowledge management systems, and other ontology-related application systems. Earlier research on ontology merging, however, has concentrated on developing effective ontology matching approaches and has not analyzed or solved the problems that arise when merging two ontologies for which correspondences are already given. In this paper, we propose a specific ontology merging process and a generic operator, T-MERGE, for integrating two source ontologies into a new ontology. We also define a taxonomy of merging conflicts, derived from differing representations between input ontologies, and a method for detecting and resolving them. Our T-MERGE operator encapsulates the process of detecting and resolving conflicts and merging two entities based on the given correspondences between them. We define a data structure, MergeLog, for logging the execution of the T-MERGE operator; MergeLog is used to report detailed merging results to users or to recover from errors. For our experiments, we used oriental philosophy ontologies, western philosophy ontologies, the Yahoo western philosophy dictionary, and the Naver philosophy dictionary as input ontologies. Our experiments show that the automatic merging module has advantages in terms of time and effort compared with manual merging by an expert.
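A minimal sketch of the merge-operator contract the abstract describes, assuming ontologies are reduced to dictionaries of entities with attribute sets and that correspondences are given; the real T-MERGE works on Topic Map constructs with a much richer conflict taxonomy, and merge_log here only stands in for MergeLog.

```python
# Minimal sketch of a pairwise merge operator in the spirit of T-MERGE:
# merge matched entity pairs, detect simple naming conflicts, copy the rest,
# and log every step into a MergeLog-like list.
merge_log = []   # stands in for the paper's MergeLog structure

def merge_entities(ont_a: dict, ont_b: dict, correspondences: list) -> dict:
    merged = dict(ont_a)
    matched_b = set()
    for name_a, name_b in correspondences:
        matched_b.add(name_b)
        if name_a != name_b:
            merge_log.append(f"conflict: names differ ({name_a} vs {name_b}); keeping {name_a}")
        merged[name_a] = ont_a[name_a] | ont_b[name_b]   # union of attributes
        merge_log.append(f"merged {name_a} <- {name_b}")
    for name_b, attrs in ont_b.items():                  # copy unmatched entities
        if name_b not in matched_b:
            merged.setdefault(name_b, set()).update(attrs)
            merge_log.append(f"copied {name_b}")
    return merged

result = merge_entities({"Laozi": {"Taoism"}}, {"Lao-tzu": {"philosopher"}},
                        [("Laozi", "Lao-tzu")])
print(result, merge_log)
```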

Buffer Cache Management for Low Power Consumption (저전력을 위한 버퍼 캐쉬 관리 기법)

  • Lee, Min;Seo, Eui-Seong;Lee, Joon-Won
    • Journal of KIISE:Computer Systems and Theory / v.35 no.6 / pp.293-303 / 2008
  • As the computing environment moves to wireless and handheld systems, power efficiency is becoming more important. This is especially the case in embedded handheld systems, where the power consumed by the memory system takes the second largest portion of the total. To save the energy consumed in the memory system, we can utilize the low-power modes of SDRAM. In the case of RDRAM, nap mode consumes less than 5% of the power consumed in active or standby mode. However, the hardware controller by itself cannot use this facility efficiently unless the operating system cooperates. In this paper we focus on how to minimize the number of active SDRAM units. The operating system allocates its physical pages so that only a few SDRAM units need to be activated and the unnecessary units can be put into nap mode. This work can be considered a generalized, system-wide version of the PAVM (Power-Aware Virtual Memory) research: we take all of physical memory into account, especially the buffer cache, which occupies about half of total memory usage on average. Because of the size and importance of the buffer cache, a PAVM approach cannot be robust without taking it into account. In this paper, we analyze RAM usage and propose a power-aware page allocation policy. In particular, the pages mapped into a process' address space and the buffer cache pages are considered, and the relationship and interactions between these two kinds of pages are analyzed and exploited for energy saving.
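A toy model of the allocation idea, assuming a simple two-state (active/nap) memory unit and a flat free list per unit; the Allocator class, unit sizes, and numbers are illustrative, not the paper's kernel implementation.

```python
# Toy model of power-aware page allocation: prefer free pages from memory
# units that are already active, so idle units can stay in a low-power mode.
class Allocator:
    def __init__(self, units: int, pages_per_unit: int):
        self.free = {u: set(range(pages_per_unit)) for u in range(units)}
        self.active = set()          # units that must stay powered up

    def alloc_page(self) -> tuple:
        # First try a unit that is already active and still has free pages.
        for u in sorted(self.active):
            if self.free[u]:
                return u, self.free[u].pop()
        # Otherwise wake up the unit with the most free pages (assumes one exists).
        u = max(self.free, key=lambda x: len(self.free[x]))
        self.active.add(u)
        return u, self.free[u].pop()

a = Allocator(units=4, pages_per_unit=1024)
pages = [a.alloc_page() for _ in range(3000)]
print("active units:", sorted(a.active))     # only as many units as needed
```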

Development Life Cycle-Based Association Analysis of Requirements for Risk Management of Medical Device Software (의료기기 소프트웨어 위험관리를 위한 개발생명주기 기반 위험관리 요구사항 연관성 분석)

  • Kim, DongYeop;Park, Ye-Seul;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering / v.6 no.12 / pp.543-548 / 2017
  • In recent years, the importance of the safety of medical device software has been emphasized, because of the function and role of the software among the components of a medical device and because the operation of the medical device software is directly related to the life and safety of the user. To this end, various standards have been established that define activities that can effectively ensure the safety of medical devices, together with their respective requirements. The activities that the standards provide to ensure the safety of medical device software are largely divided into the development life cycle of medical device software and the risk management process. These two activities should proceed concurrently with the development process, but there is a limitation in that the risk management requirements to be performed at each stage of the medical device software development life cycle are not classified. As a result, developers must analyze the associations between the standards themselves in order to perform risk management activities during the development of medical devices. Therefore, in this paper, we analyze the relationship between the medical device software development life cycle and the risk management process, and extract risk management requirement items. This enables efficient and systematic risk management during the development of medical device software by mapping the extracted risk management requirement items to the development life cycle based on the analyzed associations.
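Purely as an illustration of the kind of mapping the paper extracts, a lookup from life-cycle stages to risk management requirement items could look like the sketch below; all stage names and item texts are hypothetical placeholders, not the paper's extracted items or the text of any standard.

```python
# Illustrative only: stage-to-requirement lookup in the spirit of the paper's
# association analysis. Stage names and item IDs are hypothetical examples.
STAGE_TO_RISK_REQS = {
    "planning":             ["RM-01 risk management plan established"],
    "requirements":         ["RM-02 hazards related to software requirements identified"],
    "architectural design": ["RM-03 risk control measures allocated to software items"],
    "implementation":       ["RM-04 risk control measures implemented and traced"],
    "testing":              ["RM-05 effectiveness of risk control measures verified"],
}

def risk_requirements_for(stage: str) -> list:
    return STAGE_TO_RISK_REQS.get(stage, [])

print(risk_requirements_for("architectural design"))
```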

On Mapping Growing Degree-Days (GDD) from Monthly Digital Climatic Surfaces for South Korea (월별 전자기후도를 이용한 생장도일 분포도 제작에 관하여)

  • Kim, Jin-Hee;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology / v.10 no.1 / pp.1-8 / 2008
  • The concept of growing degree-days (GDD) is widely accepted as a tool to relate plant growth, development, and maturity to temperature. Information on GDD can be used to predict the yield and quality of several crops, the flowering date of fruit trees, and insect activity related to agriculture and forestry. When GDD is expressed on a spatial basis, it helps identify the limits of geographical areas suitable for production of various crops and evaluate areas agriculturally suitable for new or nonnative plants. The national digital climate maps (NDCM, the fine-resolution, gridded climate data for climatological normal years) are provided not on a daily basis but on a monthly basis, which prevents direct GDD calculation. We applied a widely used GDD estimation method based on monthly data to a part of the NDCM (for Hapcheon County) to produce the spatial GDD data for each month with three different base temperatures (0, 5, and $10^{\circ}C$). Synthetically generated daily temperatures from the NDCM were used to calculate GDD over the same area, and the deviations were calculated for each month. The monthly-data-based GDD was close to the reference GDD using daily data only for the case of base temperature $0^{\circ}C$; there was a consistent overestimation of GDD with the other base temperatures. Hence, we estimated spatial GDD with base temperature $0^{\circ}C$ over the entire nation for the current (1971-2000, observed) and three future (2011-2040, 2041-2070, and 2071-2100, predicted) climatological normal years. Our estimation indicates that the annual GDD in Korea may increase by 38% in 2071-2100 compared with that in 1971-2000.
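For reference, the daily (baseline) definition of GDD used for comparison in the abstract can be computed as below; the temperature values are made up for illustration, and the base temperature is passed as a parameter.

```python
# Reference (daily) growing degree-days:
#   GDD = sum over days of max(0, (Tmax + Tmin) / 2 - Tbase)
def growing_degree_days(daily_tmax, daily_tmin, t_base=0.0):
    gdd = 0.0
    for tmax, tmin in zip(daily_tmax, daily_tmin):
        gdd += max(0.0, (tmax + tmin) / 2.0 - t_base)
    return gdd

tmax = [12.1, 14.3, 9.8, 16.0]   # illustrative daily maxima (deg C)
tmin = [3.2, 5.0, 1.1, 6.4]      # illustrative daily minima (deg C)
print(growing_degree_days(tmax, tmin, t_base=5.0))   # GDD with base 5 deg C
```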

Landslide Susceptibility Mapping and Verification Using the GIS and Bayesian Probability Model in Boun (지리정보시스템(GIS) 및 베이지안 확률 기법을 이용한 보은지역의 산사태 취약성도 작성 및 검증)

  • Choi, Jae-Won;Lee, Sa-Ro;Min, Kyung-Duk;Woo, Ik
    • Economic and Environmental Geology / v.37 no.2 / pp.207-223 / 2004
  • The purpose of this study is to reveal the spatial relationships between landslides and a geospatial data set, to map the landslide susceptibility using these relationships, and to verify the landslide susceptibility map using landslide occurrence data for the Boun area in 1998. Landslide locations were detected from aerial photographs and field surveys, and topography, soil, forest, and land cover data sets were then constructed as a spatial database using GIS. Various spatial parameters were used as landslide occurrence factors: slope, aspect, curvature and type of topography; texture, material, drainage and effective thickness of soil; type, age, diameter and density of wood; lithology; distance from lineament; and land cover. To calculate the relationship between landslides and the geospatial database, the Bayesian probability method, weight of evidence, was applied, and the contrast value, that is $W^{+}-W^{-}$, was calculated. The landslide susceptibility index was calculated by summation of the contrast values, and the landslide susceptibility maps were generated using the index. The landslide susceptibility map can be used to reduce associated hazards and to plan land cover and construction.
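For reference, the weight-of-evidence quantities behind the contrast value mentioned above are commonly defined as follows, where $B$ denotes the presence of an evidential factor class and $D$ a landslide occurrence (standard formulation, not quoted from the paper):

$$W^{+} = \ln\frac{P(B\mid D)}{P(B\mid \bar{D})}, \qquad W^{-} = \ln\frac{P(\bar{B}\mid D)}{P(\bar{B}\mid \bar{D})}, \qquad C = W^{+} - W^{-}$$

A larger contrast $C$ indicates a stronger positive spatial association between the factor class and landslide occurrence.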

Accuracy of Parcel Boundary Demarcation in Agricultural Area Using UAV-Photogrammetry (무인 항공사진측량에 의한 농경지 필지 경계설정 정확도)

  • Sung, Sang Min;Lee, Jae One
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.1 / pp.53-62 / 2016
  • In recent years, UAV photogrammetry based on an ultra-light UAS (Unmanned Aerial System) equipped with a low-cost compact navigation device and a camera has attracted great attention through the fast and accurate acquisition of geo-spatial data. In particular, UAV photogrammetry is gradually replacing traditional aerial photogrammetry because it can produce DEMs (Digital Elevation Models) and orthophotos rapidly, owing to the collection of large amounts of high-resolution imagery with a low-cost camera and image processing software combined with computer vision techniques. With these advantages, UAV photogrammetry has therefore been applied to large-scale mapping and cadastral surveying, which require accurate position information. This paper presents experimental results of an accuracy performance test, with images of 4 cm GSD from a fixed-wing UAS, for demarcating parcel boundaries in an agricultural area. The accuracy of boundary points extracted from the UAS orthoimage was better than 8 cm compared with that of terrestrial cadastral surveying. This means that UAV images satisfy the tolerance limit of distance error in cadastral surveying at the scale of 1:500. In addition, the area deviation is negligibly small, about 0.2% (3.3 m²), against the true area of 1,969 m² determined by cadastral surveying. UAV photogrammetry is therefore a promising technology for demarcating parcel boundaries.
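As a quick check of the reported figure, the relative area deviation is

$$\frac{3.3\,\mathrm{m^2}}{1{,}969\,\mathrm{m^2}} \approx 0.0017 \approx 0.17\%,$$

which matches the roughly 0.2% quoted above after rounding.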