• Title/Abstract/Keyword: Crisp data

Search results: 70 (processing time 0.026 s)

Fast Fuzzy Control of Warranty Claims System

  • Lee, Sang-Hyun; Cho, Sung-Eui; Moon, Kyung-Li
    • Journal of Information Processing Systems / Vol. 6 No. 2 / pp. 209-218 / 2010
  • Classical warranty plans require crisp data obtained from strictly controlled reliability tests. However, in a real situation these requirements might not be fulfilled. In an extreme case, the warranty claims data come from users whose reports are expressed in a vague way. Furthermore, there are special situations where several characteristics are used together as criteria for judging the warranty eligibility of a failed product. This paper suggests a fast reasoning model based on fuzzy logic to handle multi-attribute and vague warranty data.
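A multi-attribute fuzzy screening step like the one the abstract describes can be sketched as follows. The attribute names (user-reported severity, usage hours), the triangular membership shapes, the min (AND) aggregation, and the 0.5 eligibility cut-off are all illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch of multi-attribute fuzzy warranty screening.
# Attributes, membership shapes, and the 0.5 threshold are invented.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def warranty_eligibility(severity, usage_hours):
    """Combine vague attributes with a min (fuzzy AND) operator."""
    mu_severe = tri(severity, 3.0, 7.0, 11.0)           # user-reported 0-10 scale
    mu_low_usage = tri(usage_hours, -1.0, 0.0, 2000.0)  # light usage favors a claim
    return min(mu_severe, mu_low_usage)

claim = warranty_eligibility(severity=8.0, usage_hours=500.0)
eligible = claim >= 0.5   # illustrative cut-off for a crisp accept/reject decision
```

A crisp plan would demand exact reliability-test values; here a vague report ("quite severe, lightly used") still yields a graded eligibility degree.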

A Southeast Asia Environmental Information Web Portal

  • Low, John; Liew, Soo-Chin; Lim, Agnes; Chang, Chew-Wai; Kwoh, Leong-Keong
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2003년도 Proceedings of ACRS 2003 ISRS / pp. 1006-1008 / 2003
  • In this paper, we describe the development of a Southeast Asia environmental information web portal based on near-real-time MODIS Level 2 and higher-level products generated from the direct broadcast data received at the Centre for Remote Imaging, Sensing and Processing (CRISP). This web portal aims to deliver timely environmental information to interested users in the region. Interpreted data will be provided instead of raw satellite data to reduce the operational requirements on our system and to enable users with limited bandwidth to access the system.


A study of creative human judgment through the application of machine learning algorithms and feature selection algorithms

  • Kim, Yong Jun; Park, Jung Min
    • International journal of advanced smart convergence / Vol. 11 No. 2 / pp. 38-43 / 2022
  • Defining and judging creative people is difficult because there is no systematic analysis method based on accurate standards or numerical values. In a previous study, "A study on the application of rule success cases through machine learning algorithm extraction," a case study was conducted to help verify and confirm psychological personality and aptitude tests. That work proposed a solution to a research problem in psychology using machine learning algorithms and CRISP-DM, the Cross-Industry Standard Process for Data Mining. Building on it, this study proposes a solution that helps to judge creative people by applying feature selection algorithms. Seven feature selection algorithms were used to measure accuracy: the feature groups they selected were classified with a support vector machine, and the feature group yielding the highest classification accuracy was identified.
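The select-then-classify pipeline the abstract describes can be illustrated with a single correlation-based filter; the paper pairs seven selection algorithms with an SVM, but here one filter and no classifier keep the sketch dependency-free. The data, feature indices, and the choice of filter are invented for illustration.

```python
# Sketch of filter-style feature selection: score each column by its
# absolute correlation with the label, keep the top k. A classifier
# (the paper's SVM) would then be trained on the selected columns only.

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_features(X, y, k):
    """Keep indices of the k columns most correlated (in magnitude) with y."""
    scores = [abs(correlation([row[j] for row in X], y))
              for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

# Toy data: column 0 tracks the label, column 1 is constant-ish noise.
X = [[1, 5, 0.2], [2, 4, 0.9], [8, 5, 0.1], [9, 4, 0.8]]
y = [0, 0, 1, 1]
top = select_features(X, y, 2)
```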

Application of AI-based Customer Segmentation in the Insurance Industry

  • Kyeongmin Yum; Byungjoon Yoo; Jaehwan Lee
    • Asia pacific journal of information systems / Vol. 32 No. 3 / pp. 496-513 / 2022
  • Artificial intelligence and big data technologies can benefit finance companies such as those in the insurance sector. With artificial intelligence, companies can develop better customer segmentation methods and eventually improve the quality of customer relationship management. However, the application of AI-based customer segmentation in the insurance industry seems to have been unsuccessful. Findings from our interviews with sales agents and customer service managers indicate that current customer segmentation at a Korean insurance company relies upon individual agents' heuristic decisions rather than a generalizable data-based method. We propose guidelines for AI-based customer segmentation for the insurance industry, based on the CRISP-DM standard data mining project framework. Our proposed guidelines provide new insights for studies on AI-based technology implementation and have practical implications for companies that deploy algorithm-based customer relationship management systems.
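The modeling phase of such a segmentation pipeline can be sketched with a minimal 1-D k-means. The feature (annual premium), k=2, and the data are illustrative assumptions; a full CRISP-DM project would wrap this step in business understanding, data preparation, evaluation, and deployment phases.

```python
# Minimal pure-Python k-means sketch for customer segmentation (1-D).
# Feature choice and k are invented for illustration.

def kmeans_1d(values, k=2, iters=20):
    # Spread initial centers across the sorted data.
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[j].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

premiums = [120, 130, 110, 900, 950, 980]   # annual premium per customer
centers, segments = kmeans_1d(premiums, k=2)
```

The point of the abstract is precisely that a reproducible, data-based rule like this replaces each agent's private heuristic.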

Effects of Uncertain Spatial Data Representation on Multi-source Data Fusion: A Case Study for Landslide Hazard Mapping

  • Park No-Wook; Chi Kwang-Hoon; Kwon Byung-Doo
    • 대한원격탐사학회지 / Vol. 21 No. 5 / pp. 393-404 / 2005
  • Since multi-source spatial data fusion mainly deals with various types of spatial data that are specific representations of the real world, with unequal reliability and incomplete knowledge, proper data representation and uncertainty analysis become more important. In relation to this problem, this paper presents and applies an advanced data representation methodology for different types of spatial data, such as categorical and continuous data. To account for the uncertainties of categorical and continuous data, fuzzy boundary representation and smoothed kernel density estimation within a fuzzy logic framework are adopted, respectively. To investigate the effects of these data representations on the final fusion results, a case study for landslide hazard mapping was carried out on multi-source spatial data sets from Jangheung, Korea. The results obtained from the proposed schemes were compared with those obtained by traditional crisp boundary representation and categorized continuous data representation methods. The proposed scheme showed higher prediction rates than the traditional methods, and different representation settings resulted in variations in the prediction rates.
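The contrast between crisp and fuzzy boundary representation can be shown on a single continuous layer. The variable (slope), the 15-degree break point, and the 5-degree transition width are illustrative assumptions, not values from the paper.

```python
# Crisp vs. fuzzy boundary representation for a continuous spatial layer.
# Threshold and transition width are invented for illustration.

def crisp_steep(slope_deg):
    """Sharp class boundary: a cell is either steep or not."""
    return 1.0 if slope_deg >= 15.0 else 0.0

def fuzzy_steep(slope_deg, center=15.0, width=5.0):
    """Linear transition zone instead of a sharp class boundary."""
    lo, hi = center - width / 2, center + width / 2
    if slope_deg <= lo:
        return 0.0
    if slope_deg >= hi:
        return 1.0
    return (slope_deg - lo) / width

# A 14.9-degree slope flips class entirely under the crisp rule,
# but keeps a graded membership under the fuzzy representation.
```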

A Study on Data Classification of Raman OIM Hyperspectral Bone Data

  • Jung, Sung-Hwan
    • 한국멀티미디어학회논문지 / Vol. 14 No. 8 / pp. 1010-1019 / 2011
  • This was preliminary research toward understanding the relation between the internal structure of Osteogenesis Imperfecta Murine (OIM) bone and its fragility. 54 hyperspectral bone data sets were captured using a JASCO 2000 Raman spectrometer at UMKC-CRISP (University of Missouri-Kansas City Center for Research on Interfacial Structure and Properties). Each data set consists of 1,091 data points from 9 OIM bones. The originally captured hyperspectral data sets were noisy and baseline-shifted, so we removed the noise and corrected the baseline before classification. The high-dimensional Raman hyperspectral data on OIM bones was reduced by Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) and efficiently classified for the first time. We confirmed that OIM bones can be classified as strong, middle, or weak using their PCA or LDA coefficients. Through experiments, we investigated the classification efficiency of the Bayesian and K-Nearest Neighbor (K-NN) classifiers on the reduced OIM bone data. LDA reduction showed higher classification performance than PCA reduction with both classifiers, and the K-NN classifier achieved a better classification rate than the Bayesian classifier: about 92.6% in the LDA case.
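The K-NN classification step can be sketched in a few lines. The 2-D points below are toy stand-ins for the LDA-reduced Raman features, and the labels and k=3 are illustrative; only the algorithm itself matches the abstract.

```python
# Pure-Python K-Nearest Neighbor sketch of the classification step.
# Toy "spectra" stand in for the LDA-reduced features.

from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Majority vote among the k training points nearest to the query."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), lab)
        for x, lab in zip(train, labels))
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

train = [(0.1, 0.2), (0.2, 0.1), (0.9, 1.0), (1.0, 0.9), (0.5, 0.5)]
labels = ["weak", "weak", "strong", "strong", "middle"]
pred = knn_predict(train, labels, query=(0.15, 0.15), k=3)
```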

Data Pattern Estimation with Movement of the Center of Gravity

  • Ahn Tae-Chon; Jang Kyung-Won; Shin Dong-Du; Kang Hak-Soo; Yoon Yang-Woong
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 6 No. 3 / pp. 210-216 / 2006
  • In rule-based modeling, data partitioning plays a crucial role because each partitioned sub-data set carries particular information about the given data set or system. In this paper, we present an empirical study of data pattern estimation to find the underlying patterns of given data. The presented method performs crisp clustering on the given n data samples by means of the sequential agglomerative hierarchical nested (SAHN) model. In each sequence, the average of the summed distances between each centroid and its data points is computed, and the derivative of the weighted average distance is then taken to observe the pattern distribution. As a final step, after the overall clustering process is completed, the weighted average distance is used to estimate the range of the number of clusters in the given data set. The proposed estimation method is evaluated on the FCM demo data set in the MATLAB Fuzzy Logic Toolbox and on Box and Jenkins's gas furnace data.
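The distance curve driving that estimate can be sketched as follows: singleton clusters are merged pairwise by nearest centroids (a SAHN-style sequence), and the mean point-to-centroid distance is recorded at each step. The 1-D toy data and the plain (unweighted) average are illustrative simplifications of the paper's weighted measure.

```python
# SAHN-style agglomerative merge tracking the mean point-to-centroid
# distance. A jump in the curve hints at the natural cluster count.

def centroid(c):
    return sum(c) / len(c)

def avg_centroid_distance(clusters):
    pts = sum(len(c) for c in clusters)
    return sum(abs(x - centroid(c)) for c in clusters for x in c) / pts

def sahn_curve(data):
    clusters = [[x] for x in data]
    curve = [avg_centroid_distance(clusters)]   # n singletons: distance 0
    while len(clusters) > 1:
        # Merge the pair of clusters with the nearest centroids.
        i, j = min(
            ((i, j) for i in range(len(clusters))
             for j in range(i + 1, len(clusters))),
            key=lambda p: abs(centroid(clusters[p[0]]) - centroid(clusters[p[1]])))
        clusters[i] += clusters.pop(j)
        curve.append(avg_centroid_distance(clusters))
    return curve

curve = sahn_curve([1.0, 1.2, 1.1, 5.0, 5.2])
# The large jump at the final merge suggests the data holds 2 clusters.
```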

GSIS를 이용한 입지선정에 있어 퍼지공간중첩기법의 적용에 관한 연구 (The application of fuzzy spatial overlay method to the site selection using GSIS)

  • 임승현; 조기성
    • 한국측량학회지 / Vol. 17 No. 2 / pp. 177-187 / 1999
  • To date, many GSIS applications have relied mainly on vector-based spatial overlay or raster-based spatial algebra functions to extract and analyze various kinds of spatial data. However, because the concept underlying these methods is rooted in classical crisp set theory, many kinds of spatial data are treated as if their intervals were divided by sharp boundaries. This does not match the spatial distribution patterns of real-world data; that is, it carries the one-entity-one-value error, in which a region or entity in space is restricted to a single attribute. To improve on this conventional way of handling spatial data under the crisp set concept, this study introduces the concept of fuzzy sets, which can express the vagueness of spatial data and the ambiguity of boundaries, into the spatial overlay process in two ways. The first method is fuzzy interval partitioning by fuzzy subsets for spatially continuous data, and the second is a fuzzy boundary set method applied to categorical data. As a case study, a suitability analysis for selecting a new town development site was performed, and the results of the conventional Boolean analysis and the fuzzy spatial overlay were compared. The suitability map produced by fuzzy spatial overlay provided more valid information for the new town development site and was also a more appropriate form in terms of information representation.
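The Boolean-versus-fuzzy overlay comparison can be shown on two toy raster layers. The layer names, cell values, 0.5 crisp threshold, and the min operator are illustrative assumptions; other fuzzy overlay operators (max, algebraic product) are common as well.

```python
# Boolean vs. fuzzy spatial overlay on two raster layers (one value
# per cell). Layer contents and operator choice are invented.

slope_suit  = [0.9, 0.6, 0.4, 0.1]   # fuzzy suitability per cell
access_suit = [0.8, 0.5, 0.7, 0.9]

# Crisp AND: a cell survives only if both layers pass a sharp threshold.
boolean_overlay = [int(s >= 0.5 and a >= 0.5)
                   for s, a in zip(slope_suit, access_suit)]

# Fuzzy overlay: the min operator keeps a graded suitability per cell.
fuzzy_overlay = [min(s, a) for s, a in zip(slope_suit, access_suit)]

# The crisp AND discards cell 2 outright, while the fuzzy overlay
# retains its partial suitability (0.4).
```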


FUZZY REGRESSION ANALYSIS WITH NON-SYMMETRIC FUZZY COEFFICIENTS BASED ON QUADRATIC PROGRAMMING APPROACH

  • Lee, Haekwan; Hideo Tanaka
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 1998년도 The Third Asian Fuzzy Systems Symposium / pp. 63-68 / 1998
  • This paper proposes fuzzy regression analysis with non-symmetric fuzzy coefficients. By assuming non-symmetric triangular fuzzy coefficients and applying a quadratic programming formulation, the center of the obtained fuzzy regression model attains more central tendency than the one with symmetric triangular fuzzy coefficients. For a data set composed of crisp inputs and fuzzy outputs, two approximation models, an upper approximation model and a lower approximation model, are considered as regression models. We therefore also propose an integrated quadratic programming problem by which the upper approximation model always includes the lower approximation model at any threshold level, under the assumption that the two approximation models share the same centers. The sensitivities of the weight coefficients in the proposed quadratic programming approaches are investigated on real data.


Introducing 'Meta-Network': A New Concept in Network Technology

  • Gaur, Deepti; Shastri, Aditya; Biswas, Ranjit
    • Journal of information and communication convergence engineering / Vol. 6 No. 4 / pp. 470-474 / 2008
  • A well-designed computer network technology produces benefits in several areas within an organization, between organizations (or sub-organizations), or among different organizations. Network technology streamlines business and decision processes. Graphs are useful data structures capable of efficiently representing a variety of networks in various fields. The metagraph is a graph-like construct recently introduced by Basu and Blanning in which edges map sets of nodes to sets of nodes, in place of the node-to-node edges of a conventional graph structure; it is a new type of data structure that is rapidly gaining popularity among computer scientists. Every graph is a special case of a metagraph. In this paper the authors introduce the notion of meta-networking as a new network-technological representation, which has all the capabilities of a crisp network as well as a few additional ones. It is expected that the notion of meta-networking will find wide application in due course. This paper plays the role of introducing this new concept to network technologists and scientists.
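The set-to-set edge structure that distinguishes a metagraph from a conventional graph can be sketched as follows. The class shape, node names, and the reachability routine are illustrative assumptions, not an API from the cited work.

```python
# Minimal metagraph sketch: each edge maps a SET of source nodes to a
# SET of target nodes, versus node-to-node edges in an ordinary graph.

class Metagraph:
    def __init__(self):
        self.edges = []                      # list of (frozenset, frozenset)

    def add_edge(self, sources, targets):
        self.edges.append((frozenset(sources), frozenset(targets)))

    def reachable(self, have):
        """Iteratively fire edges whose entire source set is available."""
        have = set(have)
        changed = True
        while changed:
            changed = False
            for src, dst in self.edges:
                if src <= have and not dst <= have:
                    have |= dst
                    changed = True
        return have

mg = Metagraph()
mg.add_edge({"a", "b"}, {"c"})      # c needs a AND b together
mg.add_edge({"c"}, {"d", "e"})
out = mg.reachable({"a", "b"})
```

An ordinary graph edge is recovered as the special case of singleton source and target sets, e.g. `add_edge({"a"}, {"c"})`, matching the claim that every graph is a special case of a metagraph.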