• Title/Summary/Keyword: weighting technique (가중치부여 기법)


A Study on the Land Suitability Analysis Based on Site Selection Variables using Macro Language (매크로 언어를 이용한 입지인자 변수조정에 따른 토지적합성 분석에 관한 연구)

  • Yi, Gi-Chul
    • Journal of the Korean Association of Geographic Information Studies / v.6 no.1 / pp.59-77 / 2003
  • This study validates the use of a macro language for land suitability analysis, with the aim of helping to resolve land use conflicts. A silver-town suitability analysis was conducted for Geejang Gun, Busan Metropolitan City. Digital maps of terrain, roads, facilities, and water bodies were created for use in cartographic models. One cartographic model identified the areas best suited for silver-town development based on site selection variables such as distance to facilities and roads, slope and aspect of the terrain, and land use. A second cartographic model then identified the most favorable site among the candidates by comparing proximity, usage, and environmental quality. The macro language was used to build these models and to manipulate all the spatial variables they use, so that land use conflicts arising in the decision making for the final site selection could be resolved. This study improves the effectiveness and rationality of traditional site suitability analysis.

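The weighted-overlay idea behind such cartographic suitability models can be sketched in a few lines. This is a minimal illustration only; the criterion names, normalized scores, and weights below are invented, not the study's actual model:

```python
# Minimal weighted-overlay sketch of a suitability score. All criterion
# names, normalized scores, and weights below are invented for illustration.
def suitability(scores, weights):
    """Weighted average of normalized (0..1) criterion scores."""
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in weights) / total

site = {"dist_road": 0.8, "dist_facility": 0.6, "slope": 0.9, "land_use": 0.7}
weights = {"dist_road": 2.0, "dist_facility": 1.0, "slope": 3.0, "land_use": 1.0}
score = suitability(site, weights)
```

In a real cartographic model the same weighted sum would be evaluated per raster cell rather than per site.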

A Strategy for Requirements Refinement and Prioritization Based on the PIECES Framework (PIECES 프레임워크 중심의 요구사항 정제와 우선순위 결정 전략)

  • Jeon, Hye-Young;Byun, Jung-Won;Rhew, Sung-Yul
    • Journal of the Korea Society of Computer and Information / v.17 no.10 / pp.117-127 / 2012
  • Efficiently identifying user requirements and reflecting them in an existing system is very important in rapidly changing web and mobile environments. This study proposes strategies for refining requirements and prioritizing the refined requirements when changing web and mobile applications, based on user input such as mobile application comments, Q&A postings, and reported discomfort factors. To refine the user requirements, they are grouped using standardization guidelines for the advancement of the software business and an existing configuration-based scheme. The groups are then mapped onto the PIECES framework to check whether the refined requirements are validly and purely reflected in the system. To prioritize the refined requirements, relative weights are first assigned to the software structure, the requirements, and the PIECES categories; then the points for each requirement are aggregated to obtain partial and overall scores relative to the software structure. To verify the feasibility and effectiveness of the proposed technique, a survey on the changing requirements of a mobile application in service at S University was conducted with 15 work-related stakeholders.
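A weighted prioritization step of this kind can be sketched as follows. The category weights and requirements are invented for illustration; the paper's actual weighting spans software structure, requirements, and PIECES categories together:

```python
# Sketch of weighted prioritization: each requirement maps to PIECES
# categories, and category weights (invented here) determine priority.
CATEGORY_WEIGHTS = {"Performance": 3, "Information": 2, "Economics": 2,
                    "Control": 1, "Efficiency": 2, "Service": 3}

def prioritize(requirements):
    """Return requirement names ordered by total category weight, highest first."""
    scored = [(sum(CATEGORY_WEIGHTS[c] for c in cats), name)
              for name, cats in requirements.items()]
    return [name for _, name in sorted(scored, reverse=True)]

reqs = {"faster search": ["Performance", "Efficiency"],
        "clearer error messages": ["Service"],
        "usage statistics": ["Information"]}
order = prioritize(reqs)
```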

A Posterior Preference Articulation Method to the Weighted Mean Squared Error Minimization Approach in Multi-Response Surface Optimization (다중반응표면 최적화에서 가중평균제곱오차 최소화법을 위한 선호도사후제시법)

  • Jeong, In-Jun
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.10 / pp.7061-7070 / 2015
  • Multi-Response Surface Optimization aims at finding the optimal setting of input variables considering multiple responses simultaneously. The Weighted Mean Squared Error (WMSE) minimization approach, which imposes a different weight on the two components of mean squared error, squared bias and variance, first obtains WMSE for each response and then minimizes all the WMSEs at once. Most of the methods proposed for the WMSE minimization approach to date are classified into the prior preference articulation approach, which requires that a decision maker (DM) provides his/her preference information a priori. However, it is quite difficult for the DM to provide such information in advance, because he/she cannot experience the relationships or conflicts among the responses. To overcome this limitation, this paper proposes a posterior preference articulation method to the WMSE minimization approach. The proposed method first generates all (or most) of the nondominated solutions without the DM's preference information. Then, the DM selects the best one from the set of nondominated solutions a posteriori. Its advantage is that it provides an opportunity for the DM to understand the tradeoffs in the entire set of nondominated solutions and effectively obtains the most preferred solution suitable for his/her preference structure.
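The WMSE form (w * squared_bias + (1 - w) * variance) and the posterior generate-then-choose step can be sketched as below. The candidate settings and their (squared bias, variance) pairs are invented for illustration:

```python
# Sketch of WMSE (w * squared_bias + (1 - w) * variance) and the posterior
# step: generate nondominated settings, then let the DM choose. Candidate
# names and their (squared bias, variance) pairs are invented.
def wmse(sq_bias, variance, w):
    return w * sq_bias + (1 - w) * variance

candidates = {"x1": (0.2, 1.0), "x2": (0.5, 0.4), "x3": (0.6, 0.9)}

def nondominated(cands):
    """Keep settings that no other setting beats in both objectives."""
    keep = {}
    for name, (b, v) in cands.items():
        dominated = any(b2 <= b and v2 <= v and (b2 < b or v2 < v)
                        for n2, (b2, v2) in cands.items() if n2 != name)
        if not dominated:
            keep[name] = (b, v)
    return keep

pareto = nondominated(candidates)   # the DM picks the preferred trade-off a posteriori
```

Here x3 is dominated by x2 (worse in both squared bias and variance), so only x1 and x2 are presented to the decision maker.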

LiDAR Ground Classification Enhancement Based on Weighted Gradient Kernel (가중 경사 커널 기반 LiDAR 미추출 지형 분류 개선)

  • Lee, Ho-Young;An, Seung-Man;Kim, Sung-Su;Sung, Hyo-Hyun;Kim, Chang-Hun
    • Journal of Korean Society for Geospatial Information Science / v.18 no.2 / pp.29-33 / 2010
  • The purpose of LiDAR ground classification is to achieve two goals at once: acquiring reliable ground points with high precision and describing the ground shape in detail. Despite many studies on optimized algorithms for this task, it is very difficult to classify ground points and describe the ground shape from airborne LiDAR data, and it is even more difficult in densely forested areas such as Korea. Misclassification is mainly caused by Korea's complex forest canopy hierarchy and by LiDAR point densities that are relatively coarse for ground classification. Moreover, much LiDAR surveying in South Korea is performed in summer, so the resulting point distribution differs greatly from that of Europe. This study therefore proposes an enhanced ground classification method that considers the characteristics of Korean land cover. First, highly confident candidate LiDAR points are designated as initial ground points using a big roller classification algorithm. Second, a weighted gradient kernel (WGK) algorithm is applied to find and include strongly expected ground points from the remaining candidates. The method is useful for reconstructing terrain deformed by misclassification, because it detects and retains the terrain model key points needed to describe the ground shape at a site. In particular, for the deformed bank sides of a river area, the WGK algorithm produced greatly improved classification and reconstruction results.
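The intuition behind gradient-based ground filtering can be shown with a toy example. This is not the paper's WGK: it simplifies the weighted kernel to a single slope check against the nearest confirmed ground point, in one dimension, with invented points and threshold:

```python
# Toy 1-D illustration of slope-based ground-point acceptance. The real WGK
# works on 2-D LiDAR point clouds; points and the threshold are invented.
def accept(candidate, ground, max_slope=0.3):
    """Accept (x, z) if the slope to the nearest confirmed ground point is small."""
    gx, gz = min(ground, key=lambda p: abs(p[0] - candidate[0]))
    slope = abs(candidate[1] - gz) / max(abs(candidate[0] - gx), 1e-9)
    return slope <= max_slope

ground = [(0.0, 10.0), (10.0, 11.0)]       # confirmed ground points (x, z)
gentle = accept((5.0, 10.6), ground)       # gradual rise: plausibly ground
steep = accept((5.0, 14.0), ground)        # abrupt jump: likely canopy return
```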

Design of a Real Estate Knowledge Information System Based on Semantic Search (시맨틱 검색 기반의 부동산 지식 정보시스템 설계)

  • Cho, Jae-Hyung;Kang, Moo-Hong
    • Journal of Korea Society of Industrial Information Systems / v.16 no.2 / pp.111-124 / 2011
  • Apartments' share of housing has steadily increased, and property has come to be valued as an important asset class. Internet-based information retrieval is particularly active in the real estate market. However, user satisfaction with real estate information systems is not very high, and research on making real estate retrieval more efficient has been lacking. This study presents a new knowledge information system that considers both region-related and individual-related factors in the real estate market. It also enables a real estate knowledge system to search over buyers' various preference requirements, such as school district, living convenience, and ease of maintenance as well as price. We surveyed the search-condition preferences of 30 real estate agents and analyzed the results using the AHP methodology. Furthermore, this research builds an apartment ontology using semantic web technologies to standardize the various terminologies of apartment information and shows how the ontology can help buyers find apartments of interest. After designing the architecture of the real estate knowledge information system, we applied it to the Busan real estate market and evaluated retrieval solutions through Multi-Attribute Decision Making (MADM). Based on the analysis, the factors selected by buyers and experts were given weights in the system. Evaluation results indicate that the new system raises users' satisfaction and makes it possible to search and analyze real estate effectively through the entropy analysis of MADM, consequently saving buyers' search costs.
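The entropy-weighting step commonly used in MADM can be sketched as follows. The decision matrix values are invented; the idea is that a criterion whose scores vary more across alternatives carries more discriminating information and therefore receives a larger weight:

```python
import math

# Entropy-weight sketch for a MADM decision matrix (rows: alternatives,
# columns: criteria). All matrix values are invented for illustration.
def entropy_weights(matrix):
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    diversification = []
    for j in range(n):
        col_sum = sum(row[j] for row in matrix)
        p = [row[j] / col_sum for row in matrix]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        diversification.append(1.0 - e)          # more spread -> more weight
    total = sum(diversification)
    return [d / total for d in diversification]

# Columns: price score, school-district score (hypothetical apartments).
matrix = [[7, 1], [6, 9], [8, 5]]
w = entropy_weights(matrix)
```

Here the second column varies far more than the first, so it receives the larger weight.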

Utilizing the Effect of Market Basket Size for Improving the Practicality of Association Rule Measures (연관규칙 흥미성 척도의 실용성 향상을 위한 장바구니 크기 효과 반영 방안)

  • Kim, Won-Seo;Jeong, Seung-Ryul;Kim, Nam-Gyu
    • The KIPS Transactions:PartD / v.17D no.1 / pp.1-8 / 2010
  • Association rule mining techniques enable us to acquire knowledge about sales patterns among individual items from voluminous transactional data. One major purpose of association rule mining is to use that knowledge in marketing strategies such as catalogue design, cross-selling, and shop allocation. However, extracting only the actionable, profitable knowledge from the tremendous number of discovered patterns takes too much time and cost. In the literature, a number of interest measures have been devised to accelerate and systematize pattern evaluation. Unfortunately, most such measures, including support and confidence, are prone to impractical results because they are calculated only from item sales frequencies. For instance, traditional measures cannot differentiate between purchases in a small basket and those in a large shopping cart, yet some adjustment for basket size is needed because mutually irrelevant items are quite likely to appear together in a large cart. In contrast to previous approaches, we consider market basket size when calculating interest measures. Because the devised measure assigns different weights to individual purchases according to their basket sizes, we expect it to minimize the distortion of results caused by accidental patterns. We performed intensive computer simulations under various environments, as well as real case analyses, to assess the correctness and consistency of the devised measure.
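One simple way to realize the basket-size idea is to let each transaction that contains the itemset contribute 1/|basket| instead of 1. This is an illustrative weighting, not necessarily the paper's exact formula, and the toy transactions are invented:

```python
# Sketch of a basket-size-adjusted support: a transaction containing the
# itemset contributes 1/|basket| instead of 1. The weighting scheme and the
# toy transactions are illustrative, not the paper's exact formula.
def support(transactions, itemset):
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def weighted_support(transactions, itemset):
    return sum(1.0 / len(t) for t in transactions if itemset <= t) / len(transactions)

transactions = [{"beer", "diaper"},                                     # focused basket
                {"beer", "diaper", "milk", "bread", "eggs", "apples"}]  # large cart
plain = support(transactions, {"beer", "diaper"})
adjusted = weighted_support(transactions, {"beer", "diaper"})
```

Plain support treats both transactions equally, while the adjusted measure discounts the co-occurrence that happened inside the large cart.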

Analysis of dam operation effects and development of a function formula and automated model for estimating suitable sites (댐의 운영효과 분석과 적지선정 함수식 및 자동화 모형 개발)

  • Choo, Taiho;Kim, Yoonku;Kim, Yeongsik;Yun, Gwanseon
    • Journal of Korea Water Resources Association / v.52 no.3 / pp.187-194 / 2019
  • Korea's intake ratio from rivers is about 31% (8/26), which classifies it as a "medium to high" water-stress country alongside China, India, Italy, South Africa, and others. The present study therefore examines dams, the most effective and direct means of securing water resources. First, climate change scenarios were investigated and analyzed: the RCP 4.5 and 8.5 scenarios with 12.5 km grid resolution, presented in the IPCC (Intergovernmental Panel on Climate Change) Fifth Assessment Report (AR5), were applied to the study watershed using coupled SWAT (Soil and Water Assessment Tool) and HEC-ResSim models. Based on the dam simulation results, the reduction effects on floods and droughts are presented quantitatively. The dam project procedures of the USA, Japan, and Korea were also investigated; no quantitative criteria, calculation methods, or formulas for site selection were found. In the present study, therefore, indexes for selecting suitable dam sites were determined through a literature survey and an analysis of dam watersheds, and an expert questionnaire on the indexes was conducted. Based on this investigation and the questionnaire, a methodology for assigning weights using the AHP method is proposed. The resulting function of suitable dam site selection (FSDS) was calibrated and verified for four medium-sized watersheds. Finally, an automated model for suitable dam site selection was developed using the FSDS and the Model Builder of a GIS tool.
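The AHP weighting step referenced above derives criterion weights from a pairwise comparison matrix, classically via its principal eigenvector. The sketch below uses power iteration; the 3x3 comparison values and criterion names are invented, not the study's expert judgments:

```python
# AHP weighting sketch: criterion weights from a pairwise comparison matrix
# via power iteration toward the principal eigenvector. The 3x3 comparison
# values and criterion names are invented for illustration.
def ahp_weights(A, iters=100):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Criteria: water supply vs. flood control vs. environmental impact.
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
w = ahp_weights(A)
```

A full AHP application would also check the consistency ratio of the comparison matrix before accepting the weights.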

Scaling of the Individual Differences to Cognize the Image of the City - Focusing on Seong-Nam- (개인차 척도법을 이용한 도시 이미지 인지 경향 연구 - 성남시를 중심으로 -)

  • Byeon, Jae-Sang
    • Journal of the Korean Institute of Landscape Architecture / v.36 no.4 / pp.83-99 / 2008
  • The image of Seong-Nam appears different under diverse conditions. This study analyzes differences in cognition by personal characteristics such as age, gender, location, and period when an individual evaluates an urban image. The research focuses on interpreting the visualized results of Multidimensional Scaling (MDS) and Individual Difference Scaling (INDSCAL) applied to two questionnaires. The findings can be summarized as follows: 1. Namhan Sansung was ranked by citizens as the foremost symbolic property of Seong-Nam, followed by Yuldong Park, Bundang Central Park, Seohyun Station including Samsung Plaza, and finally Moran Market; a similar trend appeared in the selection of preferred places. 2. There were no statistical differences in the choice of symbolic landmarks and preferred places by age, gender, or period, but there were meaningful differences by location. 3. On the MDS image spatial plot, the total image of Seong-Nam was positioned apart from the images of the other districts and landmarks, while the images of the old and new districts were plotted close to the symbolic landmarks located in each district. 4. INDSCAL showed that men weighted historical meaning while women weighted preference and city size when evaluating the urban image; there was no difference in cognitive trends by age, location, or period. Until now, individual differences in the cognition and evaluation of an urban image were a socially accepted notion; this study verified those differences according to personal characteristics and developed a practical tool for analyzing individual cognition trends about a city image.

Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility / v.13 no.1 / pp.47-60 / 2010
  • Most classification research has used kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), which are learning-based models, and the Bayesian classifier and NNA (Neural Network Algorithm), which are statistics-based methods. However, classifying the vast number of web pages on today's internet runs into space and time limitations. Moreover, most classification studies use a uni-gram feature representation, which poorly captures the real meaning of words. Korean web page classification faces an additional problem: Korean words are often polysemous. For these reasons, LSA (Latent Semantic Analysis) is proposed for classification in this environment of large data sets and polysemous words. LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimensionality. This yields a new low-dimensional semantic space for representing vectors, which makes classification efficient and reveals the latent meaning of words and documents (or web pages). Although LSA classifies well, it has a drawback: as SVD reduces the dimensionality and creates the new semantic space, it considers which dimensions represent vectors well, not which dimensions discriminate between them. This is why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects the optimal dimensions to both discriminate and represent vectors well, minimizing this drawback and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we obtain further improvement in classification by creating and selecting features, removing stopwords, and statistically weighting specific values.

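The core LSA mechanism, truncated SVD of a term-document matrix, can be shown on a tiny invented example. This illustrates only the standard dimension reduction, not the paper's supervised dimension selection:

```python
import numpy as np

# LSA sketch: truncated SVD of a tiny term-document count matrix (values
# invented). Rows are terms, columns are documents.
X = np.array([[2.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0],
              [0.0, 1.0, 2.0]])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                     # keep the k largest singular values
docs_k = (np.diag(s[:k]) @ Vt[:k]).T      # document vectors in the semantic space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_01 = cos(docs_k[0], docs_k[1])        # docs 0 and 1 share vocabulary
sim_02 = cos(docs_k[0], docs_k[2])        # docs 0 and 2 share almost none
```

Documents that share vocabulary stay close in the reduced semantic space, which is what makes classification in that space effective.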

An Ontology-Based Movie Contents Recommendation Scheme Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.25-44 / 2013
  • Accessing movie contents has become easier with the advent of smart TV, IPTV, and web services that can be used to search for and watch movies, and searches for preferred movie contents are increasing accordingly. However, because the amount of available movie content is so large, users need considerable effort and time to find the contents they want. Hence there has been much research on personalized item recommendation through analysis and clustering of user preferences and profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology represents not only the relations between movie metadata but also the relations between the metadata and user profiles, and the relations between metadata items express similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we chose the main metadata that affect which movies users find interesting: genre, actor/actress, keywords, and synopsis. The user model contains demographic information about the user and the relations between users and movie metadata. Our movie ontology model consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For the knowledge base, we entered individual data for 14,374 movies into each concept of the contents ontology model. This movie metadata knowledge base is used to find movies related to the metadata a user is interested in, and it can find similar movies through the relations between concepts. We also propose an architecture for movie recommendation consisting of four components. The first component searches for candidate movies based on the user's demographic information: we define user groups according to demographic information, define rules to decide each user's group, and generate the query used to search for candidate movies to recommend to that group. The second component searches for candidate movies based on user preferences: when choosing a movie, users consider metadata such as genre, actor/actress, synopsis, and keywords, so users enter their preferences and the system searches for movies accordingly. Unlike existing movie recommendation systems, the proposed system can find similar movies through the relations between concepts. Each metadata item of a recommended candidate movie has a weight that is used to decide the recommendation order. The third component merges the results of the first two components: the weight of each movie is calculated from the weight values of its metadata, and the movies are sorted by weight. The fourth component analyzes the result of the third component, decides the level of contribution of each metadata item, and applies the contribution weights to the metadata; the result of this step is presented to users as the recommendation. We tested the usability of the proposed scheme with a web application implemented for the experiment using JSP, JavaScript, and the Protégé API. We collected results from 20 men and women aged 20 to 29, using 7,418 movies with ratings of at least 7.0. We provided each user with the Top-5, Top-10, and Top-20 recommended movies and asked them to choose the ones they found interesting. On average, users chose 2.1 interesting movies in the Top-5, 3.35 in the Top-10, and 6.35 in the Top-20, which is better than the results yielded by using each metadata item alone.
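The weighted-metadata ranking used to order candidate movies can be sketched as follows. The field weights, fields, and movies are invented; in the proposed system the weights would come from the ontology relations and the contribution analysis of the fourth component:

```python
# Sketch of metadata-weighted ranking: each metadata field that matches the
# user's preference adds that field's weight to the movie's score. The
# weights, fields, and movies are invented for illustration.
METADATA_WEIGHTS = {"genre": 2.0, "actor": 3.0, "keyword": 1.0}

def rank(candidates, preferences):
    """Sort candidate movies by total weight of preference-matching metadata."""
    def score(movie):
        return sum(METADATA_WEIGHTS[field]
                   for field, value in movie["meta"].items()
                   if preferences.get(field) == value)
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"title": "A", "meta": {"genre": "sf", "actor": "Kim", "keyword": "space"}},
    {"title": "B", "meta": {"genre": "sf", "actor": "Lee", "keyword": "war"}},
]
ordered = rank(candidates, {"genre": "sf", "actor": "Kim"})
```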