• Title/Summary/Keyword: decision map


A Hybrid Forecasting Framework based on Case-based Reasoning and Artificial Neural Network (사례기반 추론기법과 인공신경망을 이용한 서비스 수요예측 프레임워크)

  • Hwang, Yousub
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.43-57
    • /
    • 2012
  • To enhance its competitive advantage in a constantly changing business environment, an enterprise must make the right decisions across many business activities based on both internal and external information. Providing accurate information therefore plays a prominent role in management decision making. Historical data can yield feasible estimates through forecasting models: if the service department can estimate the service quantity for the next period, it can effectively control the inventory of service-related resources such as personnel, parts, and facilities, and the production department can build a load map for improving product quality. Obtaining an accurate service forecast is therefore critical to manufacturing companies. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving-average models. However, these methods are only effective for data that are seasonal or cyclical; when the data are influenced by the special characteristics of a product, they are not feasible. In our research, we propose a forecasting framework that predicts the service demand of a manufacturing organization by combining Case-Based Reasoning (CBR) with an unsupervised artificial-neural-network clustering method (Self-Organizing Maps; SOM). We believe this is one of the first attempts to apply unsupervised neural-network-based machine learning in the service forecasting domain. Our proposed approach has several appealing features: (1) We applied CBR and SOM in a new forecasting domain, service demand forecasting.
(2) We combined CBR and SOM to overcome the limitations of traditional statistical forecasting methods, and we developed a service forecasting tool based on the proposed approach. We conducted an empirical study of a real digital TV manufacturer (Company A), evaluating the proposed approach and tool on the company's actual sales and service data. In the experiments, we compared the performance of the proposed framework against two other service forecasting methods: a traditional CBR-based forecasting model and the existing service forecasting model used by Company A. We ran each service forecast 144 times; each time, input data were randomly sampled for each framework. To evaluate forecast accuracy, we used the Mean Absolute Percentage Error (MAPE) as the primary performance measure and conducted a one-way ANOVA on the 144 MAPE measurements for the three approaches. The F-ratio was 67.25 with a p-value of 0.000, indicating that the differences among the MAPE values of the three approaches are statistically significant. Since a significant difference exists, we conducted Tukey's HSD post hoc test to determine exactly which MAPE means differ significantly from one another.
In terms of MAPE, Tukey's HSD post hoc test grouped the three approaches into three distinct subsets in the following order: our proposed approach > the traditional CBR-based service forecasting approach > the existing approach used by Company A. Consequently, our experiments show that the proposed approach outperformed both the traditional CBR-based forecasting model and Company A's existing service forecasting model. The rest of this paper is organized as follows. Section 2 provides research background, including a summary of CBR and SOM. Section 3 presents the hybrid service forecasting framework based on Case-Based Reasoning and Self-Organizing Maps, and the empirical evaluation results are summarized in Section 4. Conclusions and future research directions are discussed in Section 5.
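The MAPE measure used as the primary accuracy criterion above is straightforward to compute; a minimal sketch (the function name and data are illustrative, not taken from the paper):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error: the mean of |actual - forecast| / |actual|,
    expressed as a percentage. Lower values mean a more accurate forecast."""
    assert len(actual) == len(forecast) and all(a != 0 for a in actual)
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)

# Example: period errors of 10%, 5%, and 0% average to 5%.
print(mape([100, 200, 400], [110, 190, 400]))  # 5.0
```

Because MAPE is scale-free, it allows the three forecasting approaches to be compared across service items with very different demand volumes.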

Study of Rainfall-Runoff Variation by Grid Size and Critical Area (격자크기와 임계면적에 따른 홍수유출특성 변화)

  • Ahn, Seung-Seop;Lee, Jeung-Seok;Jung, Do-Joon;Han, Ho-Chul
    • Journal of Environmental Science International
    • /
    • v.16 no.4
    • /
    • pp.523-532
    • /
    • 2007
  • This study used the 1:25,000 topographic map from the National Geographic Information Institute for the area upstream of the Geum-ho water-level gauge in the middle reach of the Geum-ho river. First, the influence of the critical source area on hydro-topographic factors was examined while varying the grid size (10 m × 10 m, 30 m × 30 m, and 50 m × 50 m) and the critical area for stream initiation (0.01–0.50 km²). The examination of watershed morphology by grid size shows that the smaller the grid size, the better the resolution and accuracy. The analysis of stream order against the minimum critical area for each grid size shows that grid size does not affect stream order: the number of second- and higher-order streams shows no remarkable difference, while the number of first-order streams differs greatly. From these results, a critical area of 0.15–0.20 km² appears appropriate for stream initiation, regardless of grid size, when extracting the hydro-topographic parameters used in runoff analysis models from topographic maps. The GIUH model, applied using the stream-ordering results proposed in this study to explain how runoff response changes with the chosen critical threshold, showed that peak occurrence time and peak flow volume are very significant for flood analysis in basins without flow-control facilities; considering the model's convenient application and easy data acquisition, good results were obtained for forecasting river runoff, so the approach is judged suitable as an algorithm for deciding the critical threshold of a river basin.
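The critical-area rule described above reduces to a threshold on a flow-accumulation grid; a minimal sketch, assuming `flow_accum` counts upstream cells (all names and values here are illustrative):

```python
def stream_cells(flow_accum, cell_size_m, critical_area_km2):
    """Mark a cell as part of the channel network when its upstream
    drainage area (accumulated cell count x cell area) reaches the
    critical area for stream initiation."""
    cell_area_km2 = (cell_size_m / 1000.0) ** 2
    min_cells = critical_area_km2 / cell_area_km2  # cells needed to form a stream
    return [[acc >= min_cells for acc in row] for row in flow_accum]

# With a 10 m grid, a 0.15 km^2 critical area corresponds to 1,500 upstream cells.
grid = stream_cells([[100, 2000]], cell_size_m=10, critical_area_km2=0.15)
print(grid)  # [[False, True]]
```

Coarser grids need fewer cells to reach the same critical area, which is consistent with the finding above that first-order streams are the most sensitive to grid size.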

Accessibility Analysis in Mapping Cultural Ecosystem Service of Namyangju-si (접근성 개념을 적용한 문화서비스 평가 -남양주시를 대상으로-)

  • Jun, Baysok;Kang, Wanmo;Lee, Jaehyuck;Kim, Sunghoon;Kim, Byeori;Kim, Ilkwon;Lee, Jooeun;Kwon, Hyuksoo
    • Journal of Environmental Impact Assessment
    • /
    • v.27 no.4
    • /
    • pp.367-377
    • /
    • 2018
  • A cultural ecosystem service (CES), a non-material benefit that humans gain from ecosystems, has received growing recognition as gross national income increases. Previous research has proposed ways to quantify the value of CES, which remains a challenging issue today due to its social and cultural subjectivity. This study proposes a new way of assessing CES, called the Cultural Service Opportunity Spectrum (CSOS). CSOS is an accessibility-based CES assessment methodology for the regional scale, designed to be applicable to any region in Korea in support of decision-making processes. CSOS employs public spatial data, namely the road network and a population density map. In addition, the results of the 'Rapid Assessment of Natural Assets' implemented by the National Institute of Ecology, Korea, were used as complementary data. CSOS was applied to Namyangju-si, and the methodology revealed specific areas with great accessibility to 'Natural Assets' in the region. Based on the results, the advantages and limitations of the methodology were discussed with regard to the weighting of its three main factors, and in contrast to the Scenic Quality and Recreation models of InVEST, which are commonly used for assessing CES today because of their convenience.

Strategy for Store Management Using SOM Based on RFM (RFM 기반 SOM을 이용한 매장관리 전략 도출)

  • Jeong, Yoon Jeong;Choi, Il Young;Kim, Jae Kyeong;Choi, Ju Choel
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.93-112
    • /
    • 2015
  • Following changes in consumer consumption patterns, existing retail shops have evolved into hypermarkets and convenience stores offering mostly groceries and daily products. It is therefore important to maintain proper inventory levels and product configuration to utilize the limited space of a retail store effectively and increase sales. Accordingly, this study proposes a product configuration and inventory level strategy based on the RFM (Recency, Frequency, Monetary) model and SOM (self-organizing map) for managing a retail shop effectively. The RFM model is an analytic model of customer behavior based on past buying activities, and it can differentiate important customers within large data sets using three variables. R represents recency, referring to the time of the last purchase; customers who purchased most recently have larger R values. F represents frequency, the number of transactions in a particular period, and M represents monetary value, the amount spent in a particular period. The RFM method has thus been known as a very effective model for customer segmentation. In this study, SOM cluster analysis was performed using normalized values of the RFM variables. SOM is regarded as one of the most distinguished artificial neural network models among unsupervised learning tools. It is a popular tool for clustering and visualizing high-dimensional data in such a way that similar items are grouped spatially close to one another, and it has been successfully applied in various technical fields for finding patterns. In our research, the procedure tries to find sales patterns by analyzing product sales records with Recency, Frequency, and Monetary values; to suggest a business strategy, we build a decision tree on the SOM results. To validate the proposed procedure, we adopted M-mart data collected between 2014.01.01 and 2014.12.31.
Each product is assigned R, F, and M values and clustered into nine groups using the SOM. We also performed three tests, using weekday data, weekend data, and the whole data set, to analyze changes in sales patterns. To propose a strategy for each cluster, we examine the criteria of the product clustering: the clusters obtained from the SOM can be explained by the characteristics identified with decision trees. As a result, we can suggest an inventory management strategy for each of the nine clusters through the suggested procedure. Products in the cluster with the highest values of all three variables (R, F, M) need a high inventory level and should be placed where customer traffic is high. In contrast, products in the cluster with the lowest values of all three variables need a low inventory level and can be placed where visibility is low. Products in the cluster with the highest R values are usually new releases and should be placed at the front of the store. For the cluster with the highest F values, consisting of products purchased frequently in the past, managers should decrease inventory levels gradually, because that cluster has lower R and M values than the average product; it can be deduced that these products have sold poorly in recent days and that total sales will fall below what the purchase frequency suggests. The procedure presented in this study is expected to contribute to raising the profitability of retail stores. The paper is organized as follows. The second chapter briefly reviews the literature related to this study. The third chapter presents the proposed procedure, and the fourth chapter applies it to actual product sales data. Finally, the fifth chapter presents the conclusions of the study and directions for further research.
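The clustering step above can be sketched as min-max normalization of the RFM columns followed by a small 3×3 SOM, which yields the nine clusters used in the study. This is a bare-bones illustrative implementation under those assumptions, not the authors' actual tool:

```python
import numpy as np

def normalize_rfm(X):
    """Min-max normalize the R, F, M columns to [0, 1]."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def train_som(X, grid=(3, 3), iters=500, lr0=0.5, seed=0):
    """Minimal SOM: a 3x3 map of prototype vectors trained by moving the
    best-matching unit (and its map neighbors) toward each sample."""
    rng = np.random.default_rng(seed)
    W = rng.random((grid[0] * grid[1], X.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], dtype=float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        sigma = max(1.5 * (1 - t / iters), 0.3)   # shrinking neighborhood
        h = np.exp(-d2 / (2 * sigma ** 2))        # neighborhood kernel
        lr = lr0 * (1 - t / iters)                # decaying learning rate
        W += (lr * h)[:, None] * (x - W)
    return W

def assign_clusters(X, W):
    """Label each product with the index (0-8) of its nearest SOM unit."""
    return np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
```

The resulting labels can then feed a decision tree, as in the study, to characterize each cluster in terms of its R, F, and M levels.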

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.35-52
    • /
    • 2019
  • As public services are provided in various forms, including e-government, public demand for service quality is increasing. Although continuous measurement and improvement of public service quality is needed, traditional surveys are costly and time-consuming and have limitations. There is therefore a need for an analytical technique that can measure the quality of public services quickly and accurately at any time, based on the data the services themselves generate. In this study, we analyzed the quality of public services using process mining techniques on the building licensing complaint service of N city, chosen because it can supply the data necessary for analysis and because the approach can be spread to other institutions for public service quality management. We conducted process mining on a total of 3,678 building license complaints in N city over the two years from January 2014, and identified process maps and the departments with high frequencies and long processing times. The analysis showed that some departments were overloaded at certain points in time while others handled relatively few complaints, and there was reasonable suspicion that an increase in the number of complaints increases the time required to complete them. The time required to complete a complaint ranged from the same day to one year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division) exceeded 50%, and the cumulative frequency of the top nine departments exceeded 70%; the frequently involved departments were few, and the workload among departments was highly unbalanced. Most complaints followed a variety of different process patterns.
The analysis shows that the number of 'complement' decisions has the greatest impact on the length of a complaint. This is interpreted as follows: a 'complement' decision requires a physical period in which the complainant revises and resubmits the documents, so a lengthy period passes before the entire complaint is completed. To address this, the overall processing time can be drastically reduced if applicants prepare thoroughly before, or while, filing their complaints. By clarifying and disclosing the causes of and solutions for 'complement' decisions, which are among the important data in the system, the service can help complainants prepare in advance and give them confidence that documents prepared using the disclosed information will be accepted, making the processing of complaints sufficiently predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by eliminating renegotiation and repeated tasks. The results of this study can be used to find departments burdened with many complaints at certain points in time and to manage workforce allocation between departments flexibly. In addition, by analyzing the patterns of the departments participating in consultations according to the characteristics of the complaints, the results can be used for automation or recommendation when selecting a consultation department. Furthermore, by applying machine-learning techniques to the various data generated during the complaint process, patterns in the process can be found; implementing such algorithms in the system can support the automation and intelligence of complaint processing.
This study is expected to help suggest future public service quality improvements through process mining analysis of civil services.
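The two core measurements above, department workload and case completion time, can be read directly off an event log; a minimal sketch (the tuple layout, department names, and dates are illustrative, not the study's actual log schema):

```python
from collections import Counter
from datetime import date

def analyze_log(events):
    """events: (case_id, department, start_date, end_date) rows.
    Returns per-department frequencies and, for each case, the span in
    days from its earliest start to its latest end."""
    dept_freq = Counter(dept for _, dept, _, _ in events)
    span = {}
    for case, _, start, end in events:
        lo, hi = span.get(case, (start, end))
        span[case] = (min(lo, start), max(hi, end))
    durations = {case: (hi - lo).days for case, (lo, hi) in span.items()}
    return dept_freq, durations

log = [
    ("c1", "Sewage Treatment", date(2014, 1, 2), date(2014, 1, 5)),
    ("c1", "Waterworks",       date(2014, 1, 5), date(2014, 1, 20)),
    ("c2", "Urban Design",     date(2014, 2, 1), date(2014, 2, 1)),
]
freq, durations = analyze_log(log)
print(durations)  # {'c1': 18, 'c2': 0}
```

Sorting `freq` gives the cumulative-frequency ranking of departments described above, and the duration distribution exposes long-running cases such as those delayed by 'complement' decisions.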

Control Policy for the Land Remote Sensing Industry (미국(美國)의 지상원격탐사(地上遠隔探査) 통제제탁(統制制度))

  • Suh, Young-Duk
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.20 no.1
    • /
    • pp.87-107
    • /
    • 2005
  • 'Land remote sensing' is defined as the science (and to some extent, art) of acquiring information about the Earth's surface without actually being in contact with it. Narrowly speaking, this is done by sensing and recording reflected or emitted energy and then processing, analyzing, and applying that information. Remote sensing technology was initially developed for particular purposes, namely military and environmental observation. After the 1970s, however, as these high technologies were transferred to private industry, remote sensing became increasingly commercialized. Today, 0.61-meter high-resolution satellite images are available on the open market. While the privatization of land remote sensing has enabled this information to be used for disaster prevention, map creation, resource exploration, and more, it can also create a serious threat to a sensed nation's national security if such high-resolution images fall into the hands of hostile groups, e.g., terrorists. The United States, a leading nation in land remote sensing technology, has been developing legislative controls over the remote sensing industry and has successfully created various policies to that end. Through the National Oceanic and Atmospheric Administration's authority under the Land Remote Sensing Policy Act, the US can restrict the sensing and recording of imagery at resolutions of 0.5 meter or better and prohibit the distribution or circulation of any images for the first 24 hours. In 1994, Presidential Decision Directive 23 established a 'shutter control' policy detailing heightened restrictions from sensing through to the commercialization of such sensitive data. The Directive was strengthened further in 2003 with the adoption of the US Commercial Remote Sensing Policy.
These policies allow the Secretary of Defense and the Secretary of State to set guidelines for authorizing land remote sensing and to limit the sensing and distribution of satellite images in the name of national security; the US government can also use civilian remote sensing systems when needed for national security purposes. The fact that the world's leading aerospace technology country acknowledges the significance of land remote sensing for national security, and has made and continues to make great efforts to create the legislative measures needed to control this powerful technology, offers many lessons for our divided Korean peninsula. We, too, must continue working on the Korea National Space Development Act and related laws to develop policies that ensure not only the development of the space industry but also national security.


Correlation among Ownership of Home Appliances Using Multivariate Probit Model (다변량 프로빗 모형을 이용한 가전제품 구매의 상관관계 분석)

  • Kim, Chang-Seob;Shin, Jung-Woo;Lee, Mi-Suk;Lee, Jong-Su
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.2
    • /
    • pp.17-26
    • /
    • 2009
  • As consumer lifestyles change and the need for various products increases, new products are being developed in the market. Each household owns various home appliances, purchased through the choices of a decision maker. These include not only large products such as TVs, refrigerators, and washing machines, but also small products such as microwave ovens and air cleaners. Latent correlation exists among the appliances a household possesses, even though they are purchased independently. The purpose of this research is to analyze the effect of demographic factors on the purchase and possession of each home appliance, and to derive relationships among the various appliances. To this end, the possession status of each appliance is investigated through consumer survey data on electric and energy products, and a multivariate probit (MVP) model is applied for the empirical analysis. The estimation results show that some appliances exhibit a substitutive or complementary pattern, as expected, while others that look unrelated are correlated by coincidence. This research has several advantages over the previous literature on home appliances. First, it covers the various products purchased by each household, whereas previous studies such as Matsukawa and Ito (1998) and Yoon (2007) focused on a single product. Second, the methodology can consider the choice process for each product and the correlation among products simultaneously. Lastly, it can analyze not only substitutive or complementary relationships within a category, but also correlations among products in different categories. Because the data on household appliance possession reflect multiple choices rather than a single choice, an MVP model is used for the empirical analysis.
The MVP model is derived from a random utility model and has the advantage over a multinomial logit model that correlations among the error terms can be derived (Manchanda et al., 1999; Edwards and Allenby, 2003). The error term is assumed to follow a normal distribution with zero mean and variance-covariance matrix Ω; hence, the sign and magnitude of each correlation coefficient indicate the relationship between two alternatives (Manchanda et al., 1999). This research uses data from the 'TEMEP Household ICT/Energy Survey (THIES) 2008' conducted by the Technology Management, Economics and Policy Program at Seoul National University. The empirical analysis proceeds in two steps. First, an MVP model with demographic variables is estimated to analyze the effect of household characteristics on the purchase of each appliance; variables such as education level, region, family size, average income, and type of house are considered. Second, an MVP model excluding demographic variables is estimated to analyze the correlation among the appliances. According to the estimated variance-covariance matrix, households tend to jointly own appliance groups such as washing machine-refrigerator-cleaner-microwave oven and air conditioner-dish washer-washing machine, while several pairs, such as analog versus digital braun-tube (CRT) TVs and desktop versus portable PCs, show a substitutive pattern. Lastly, a correlation map of home appliances is derived by applying the multi-dimensional scaling (MDS) method to the variance-covariance results. This research can provide significant implications for firms' marketing strategies such as bundling, pricing, and display, as well as useful information for the development of convergence products and related technologies.
A convergence product can reduce its market uncertainty if two products that consumers tend to purchase together are integrated into it. The results of this research are all the more meaningful because they are based on the actual possession status of each household, as captured in the survey data.
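The 'correlation map' step can be sketched with classical MDS on distances derived from a correlation matrix; d² = 2(1 − r) is one common convention, and the authors' exact transformation is not stated, so this is an illustrative sketch only:

```python
import numpy as np

def classical_mds(corr, dim=2):
    """Embed items in `dim` dimensions so that highly correlated items
    (e.g. appliances often owned together) land close to each other.
    Uses the classical MDS eigendecomposition of the double-centered
    squared-distance matrix, with d^2 = 2 * (1 - r)."""
    corr = np.asarray(corr, dtype=float)
    D2 = 2.0 * (1.0 - corr)                 # squared pairwise distances
    n = len(corr)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                   # double centering
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:dim]      # largest eigenvalues first
    scale = np.sqrt(np.clip(vals[top], 0.0, None))
    return vecs[:, top] * scale
```

For example, two appliances with correlation 0.9 end up much closer in the 2-D map than a pair with correlation 0.1, which is exactly what makes the map readable for bundling and display decisions.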


Landslide Vulnerability Mapping considering GCI(Geospatial Correlative Integration) and Rainfall Probability In Inje (GCI(Geospatial Correlative Integration) 및 확률강우량을 고려한 인제지역 산사태 취약성도 작성)

  • Lee, Moung-Jin;Lee, Sa-Ro;Jeon, Seong-Woo;Kim, Geun-Han
    • Journal of Environmental Policy
    • /
    • v.12 no.3
    • /
    • pp.21-47
    • /
    • 2013
  • The aim of this study is to analyze landslide vulnerability in Inje, Korea, using GCI (Geospatial Correlative Integration) and probability rainfall, based on a geographic information system (GIS). To achieve this goal, we identified indicators influencing landslides through a literature review. The indicators cover exposure to climate (probability rainfall), sensitivity (slope, aspect, curvature, geology, topography, soil drainage, soil material, soil thickness, and soil texture), and adaptive capacity (timber diameter, type, density, and age). All data were collected, processed, and compiled in a spatial database using GIS. Karisan-ri, which experienced 470 landslides during Typhoon Ewiniar in 2006, was selected for analysis and verification. Half of the landslide data were randomly selected for training, while the other half were used for verification. The probability of landslides for the target years (1, 3, 10, 50, and 100 years) was calculated assuming that landslides are triggered by a 3-day cumulative rainfall of 449 mm. The results show that slope has a comparatively strong influence on landslide damage, with inclinations of 25-30° showing the highest correlation with landslides. The study improves on previous landslide vulnerability methodologies by adopting GCI, and the resulting vulnerability map provides meaningful information for decision makers in prioritizing areas for landslide mitigation policies.
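Correlative susceptibility methods like the GCI approach above typically rate each factor class (e.g. a slope band) by comparing its landslide density with the study-area average; the frequency-ratio statistic is one common building block, shown here as an illustrative sketch (the numbers are hypothetical, chosen only to match the 470-landslide inventory mentioned above):

```python
def frequency_ratio(class_cells, class_slides, total_cells, total_slides):
    """Ratio of the landslide density inside one factor class (e.g. the
    25-30 degree slope band) to the density over the whole study area.
    FR > 1 means the class is positively correlated with landslides."""
    return (class_slides / class_cells) / (total_slides / total_cells)

# A slope band holding 10% of the slides on only 5% of the area is twice
# as landslide-prone as the study area overall.
print(frequency_ratio(class_cells=50_000, class_slides=47,
                      total_cells=1_000_000, total_slides=470))  # 2.0
```

Computing this ratio per class, per factor, and summing (or otherwise integrating) the ratings over all factor maps yields the vulnerability surface that is then validated against the held-out half of the landslide inventory.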


The PRISM-based Rainfall Mapping at an Enhanced Grid Cell Resolution in Complex Terrain (복잡지형 고해상도 격자망에서의 PRISM 기반 강수추정법)

  • Chung, U-Ran;Yun, Kyung-Dahm;Cho, Kyung-Sook;Yi, Jae-Hyun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.11 no.2
    • /
    • pp.72-78
    • /
    • 2009
  • The demand for rainfall data in gridded digital formats has increased in recent years due to the close linkage between hydrological models and decision support systems using the geographic information system. One of the most widely used tools for digital rainfall mapping is the PRISM (parameter-elevation regressions on independent slopes model), which uses point data (rain gauge stations), a digital elevation model (DEM), and other spatial datasets to generate repeatable estimates of monthly and annual precipitation. In the PRISM, rain gauge stations are assigned weights that account for climatically important factors besides elevation, and aspect and topographic exposure are simulated by dividing the terrain into topographic facets. The size of a facet, or the grid cell resolution, is determined by the density of rain gauge stations, and a 5 km × 5 km grid cell is considered the lowest limit under the situation in Korea. The PRISM algorithms using a 270 m DEM for South Korea were implemented in a script language environment (Python), and relevant weights for each 270 m grid cell were derived from the monthly data of 432 official rain gauge stations. Weighted monthly precipitation data from at least 5 nearby stations for each grid cell were regressed on elevation, and the selected linear regression equations with the 270 m DEM were used to generate a digital precipitation map of South Korea at 270 m resolution. Among the 1.25 million grid cells, precipitation estimates at 166 cells, where measurements were made by the Korea Water Corporation rain gauge network, were extracted and the monthly estimation errors were evaluated. An average 10% reduction in the root mean square error (RMSE) was found for months with more than 100 mm of monthly precipitation, compared with the RMSE of the original 5 km PRISM estimates.
This modified PRISM may be used for rainfall mapping in rainy season (May to September) at much higher spatial resolution than the original PRISM without losing the data accuracy.
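The per-cell estimation step, regressing weighted station precipitation on elevation and evaluating the fitted line at the cell's DEM elevation, can be sketched as plain weighted least squares (the PRISM-style station weights are assumed given; all data here are hypothetical):

```python
def weighted_precip_regression(elev_m, precip_mm, weights):
    """Weighted least-squares fit precip = a * elevation + b for one grid
    cell, using nearby stations and their PRISM-style weights. The fitted
    line is then evaluated at the cell's own DEM elevation."""
    sw = sum(weights)
    mx = sum(w * x for w, x in zip(weights, elev_m)) / sw
    my = sum(w * y for w, y in zip(weights, precip_mm)) / sw
    sxy = sum(w * (x - mx) * (y - my)
              for w, x, y in zip(weights, elev_m, precip_mm))
    sxx = sum(w * (x - mx) ** 2 for w, x in zip(weights, elev_m))
    slope = sxy / sxx
    return slope, my - slope * mx

# Five hypothetical stations whose precipitation rises 5 mm per 100 m:
a, b = weighted_precip_regression([100, 200, 300, 400, 500],
                                  [105, 110, 115, 120, 125],
                                  [1, 1, 1, 1, 1])
print(a, b)  # 0.05 100.0
```

Repeating this fit for each of the 1.25 million 270 m cells, with weights reflecting distance, facet, and exposure, produces the high-resolution precipitation surface described above.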

Environmental Assessment and Decision of Remediation Scope for Arsenic Contaminated Farmland Soils and River Deposits Around Goro Abandoned Mine, Korea (토양 정밀 조사에 의한 고로폐광산 주변 비소오염 토양 및 하천퇴적토의 오염도 평가 및 오염 토양 복원 규모 설정)

  • 차종철;이정산;이민희
    • Economic and Environmental Geology
    • /
    • v.36 no.6
    • /
    • pp.457-467
    • /
    • 2003
  • A Soil Precise Investigation (SPI) of river deposits and farmland soils around the abandoned Goro Zn-mine, Korea, was performed to assess the pollution level of heavy metals (As, Pb, Cd, Cu) and to estimate the remediation volume for contaminated soils. The total investigation area was about 950,000 m², divided into sections of 1,500 m², each corresponding to one sampling site; 545 surface-soil samples (0-10 cm depth) and 192 deep-soil samples (10-30 cm depth) were collected from the investigation area for analysis. Concentrations of Cu, Cd, and Pb at all sample sites were lower than the Soil Pollution Warning Limit (SPWL). For arsenic in surface soils, 20.5% of sample sites (104 sites) were over the SPWL (6 mg/kg) and 6.7% (34 sites) were over the Soil Pollution Counterplan Limit (SPCL: 15 mg/kg), suggesting that surface soils were broadly contaminated by As. For deep soils, 10.4% of sample sites (18 sites) were over the SPWL and 0.6% (1 site) was over the SPCL. Four pollution grades for sample locations were prescribed by the Law of Soil Environmental Preservation, and a Pollution Index (PI) for each soil sample was decided according to the pollution grades (over 15.0 mg/kg, 6.00-15.00 mg/kg, 2.40-6.00 mg/kg, 1.23-6.00 mg/kg). A pollution contour map around the Goro mine based on the PI results was finally created to calculate the contaminated area and the remediation volume for contaminated soils. The remediation area with concentrations over the SPWL was about 0.3% of the total area between the Goro mine and a projected storage dam, and 0.9% of the total area exceeded 40% of the SPWL. If the remediation target concentration were set at the background level, 1.1% of the total area would need to be treated. The total soil volume to be treated for remediation was estimated on the assumption that the thickness of contaminated soil was 30 cm.
The soil volume to be remediated based on exceedance of the SPWL was estimated at 79,200 m³; the volume exceeding 40% of the SPWL was about 233,700 m³; and the volume exceeding the background level (1.23 mg/kg) was 290,760 m³.
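The volume figures follow directly from the contaminated area times the assumed 30 cm depth; a one-line check (the area value below is back-calculated from the reported volume, purely for illustration):

```python
def remediation_volume_m3(area_m2, thickness_m=0.3):
    """Soil volume to treat, assuming a uniformly 30 cm thick
    contaminated layer as in the study."""
    return area_m2 * thickness_m

# 79,200 m^3 at 0.3 m depth corresponds to 264,000 m^2 of surface area.
print(remediation_volume_m3(264_000))  # 79200.0
```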