• Title/Summary/Keyword: design power

Search Result 17,131, Processing Time 0.052 seconds

An Optimization Study on a Low-temperature De-NOx Catalyst Coated on Metallic Monolith for Steel Plant Applications (제철소 적용을 위한 저온형 금속지지체 탈질 코팅촉매 최적화 연구)

  • Lee, Chul-Ho;Choi, Jae Hyung;Kim, Myeong Soo;Seo, Byeong Han;Kang, Cheul Hui;Lim, Dong-Ha
    • Clean Technology
    • /
    • v.27 no.4
    • /
    • pp.332-340
    • /
    • 2021
  • With the recent reinforcement of emission standards, efforts are needed to reduce NOx from air pollutant-emitting workplaces. The NOx reduction method mainly used in industrial facilities is selective catalytic reduction (SCR), and the most common commercial SCR catalyst is the ceramic honeycomb type. This study was carried out to reduce the NOx emitted from steel plants by applying a De-NOx catalyst coated on a metallic monolith. The De-NOx catalyst was synthesized through an optimized coating technique, and air jet erosion and bending tests confirmed that the coated catalyst adhered uniformly and strongly to the surface of the metallic monolith. Owing to the good thermal conductivity of the metallic monolith, the coated De-NOx catalyst showed good De-NOx efficiency at low temperatures (200 ~ 250 ℃). In addition, the optimal amount of catalyst coating on the metallic monolith surface was determined for the design of an economical catalyst. Based on these results, a De-NOx catalyst of commercial-grade size was tested in a semi-pilot De-NOx performance facility under a simulated gas similar to the exhaust emitted from a steel plant. Even at a low temperature (200 ℃), it showed excellent performance, satisfying the emission standard (less than 60 ppm). The De-NOx catalyst coated on the metallic monolith therefore has good physical and chemical properties and showed good De-NOx efficiency even with a minimum amount of catalyst. Additionally, applying a high-density cell made it possible to make the SCR reactor compact and downsized. We therefore suggest that the proposed metallic-monolith-coated De-NOx catalyst may be a good alternative De-NOx catalyst for industrial uses such as steel plants, thermal power plants, incineration plants, ships, and construction machinery.
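As a back-of-the-envelope companion to the figures above, the sketch below computes De-NOx (conversion) efficiency from inlet and outlet NOx concentrations and checks the emission standard. The 60 ppm limit comes from the abstract; the inlet and outlet values are hypothetical illustrations, not measurements from the paper.

```python
# Hypothetical illustration: SCR De-NOx conversion efficiency.
# The 60 ppm limit is from the abstract; the concentrations are assumed.

def denox_efficiency(nox_in_ppm: float, nox_out_ppm: float) -> float:
    """Fractional NOx conversion: (inlet - outlet) / inlet."""
    return (nox_in_ppm - nox_out_ppm) / nox_in_ppm

def meets_emission_standard(nox_out_ppm: float, limit_ppm: float = 60.0) -> bool:
    """True if the outlet NOx concentration satisfies the emission limit."""
    return nox_out_ppm < limit_ppm

# Assumed example: 300 ppm at the inlet reduced to 45 ppm at the outlet
efficiency = denox_efficiency(300.0, 45.0)   # 0.85, i.e. 85% conversion
compliant = meets_emission_standard(45.0)    # True: 45 ppm < 60 ppm
```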

The Effect of Social Capital on Health-related Quality of Life - Using the Data of the 2019 Community Health Survey - (노인의 사회적 자본이 건강 관련 삶의 질에 미치는 영향 - 2019년 지역사회건강조사를 중심으로 -)

  • Kim, Ji-Hee;Park, Jong
    • Journal of agricultural medicine and community health
    • /
    • v.46 no.4
    • /
    • pp.280-294
    • /
    • 2021
  • Objectives: The purpose of this study is to investigate the effects of social capital characteristics, socio-demographic characteristics, physical condition, and health behavior characteristics on the health-related quality of life of the elderly in Korea. Methods: T-tests, one-way ANOVA, and regression analysis were performed, applying a complex sample design, on 57,787 people aged 65 and over from the 2019 Community Health Survey. Results: First, the complex-sample t-test and ANOVA analyses showed differences in health-related quality of life according to social capital characteristics, physical condition and health behavior characteristics, and socio-demographic characteristics. In the complex-sample regression analysis, the explanatory power of the model was 28%. Living in the metropolitan area, living in an apartment building, having a spouse, higher household income, economic activity, higher educational attainment, longer sleeping time, walking time, frequent binge drinking, health checkups, networking, trust, and social participation were associated with higher health-related quality of life. Older age, female gender, higher BMI, a greater number of chronic diseases, and severe stress were associated with lower health-related quality of life. Conclusions: The factors affecting the health-related quality of life of the elderly are not only physical condition and health behavior factors but also social capital and socio-demographic characteristics, indicating that one's role as a community member is important.
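The core arithmetic behind a survey-weighted regression like the one above is a weighted least-squares fit. The sketch below uses entirely synthetic data and weights to show that arithmetic; a faithful analysis of the Community Health Survey would additionally need strata and cluster information for correct complex-sample variance estimates.

```python
import numpy as np

# Minimal weighted-least-squares sketch of a survey-weighted regression.
# All data (ages, incomes, weights, outcome) are synthetic assumptions.

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(65, 90, n)                     # elderly respondents
income = rng.uniform(1, 5, n)                    # household income level
w = rng.uniform(0.5, 2.0, n)                     # survey (design) weights
# Quality-of-life score falls with age, rises with income (assumed effects)
y = 0.9 - 0.004 * age + 0.02 * income + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), age, income])   # intercept, age, income
W = np.diag(w)
# Weighted normal equations: beta = (X'WX)^{-1} X'Wy
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

With the assumed effects, the fitted age coefficient comes out negative and the income coefficient positive, mirroring the direction of the study's findings.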

The Clinical Utility of Korean Bayley Scales of Infant and Toddler Development-III - Focusing on using of the US norm - (베일리영유아발달검사 제3판(Bayley-III)의 미국 규준 적용의 문제: 미숙아 집단을 대상으로)

  • Lim, Yoo Jin;Bang, Hee Jeong;Lee, Soonhang
    • Korean journal of psychology:General
    • /
    • v.36 no.1
    • /
    • pp.81-107
    • /
    • 2017
  • The study aims to investigate the clinical utility of the Bayley-III using the US norm in Korea. A total of 98 preterm infants and 93 term infants were assessed with the K-Bayley-III. The performance pattern of preterm infants was analyzed with mixed-design ANOVA, which examined the differences in scaled scores and composite scores of the Bayley-III between the full-term and preterm groups and within the preterm group. We then investigated the agreement between classifications of delay made using the BSID-II and the Bayley-III. In addition, ROC plots were constructed to identify Bayley-III cut-off scores with optimal diagnostic utility in this sample. The results were as follows. (1) Preterm infants showed significantly lower function in five scaled scores and three developmental indexes compared with infants born at term; significant differences among scores within the preterm group were also found. (2) The Bayley-III yielded higher Mental Development Index and Psychomotor Developmental Index scores than the K-BSID-II, and lower rates of developmental delay. (3) The Cognitive, Language, and Motor scales of the Bayley-III all had appropriate levels of discrimination, but the cut-off composite scores had to be adjusted 13~28 points higher than 69 to predict delay as defined by the K-BSID-II, which explains the lower rates of developmental delay under the two-standard-deviation criterion. This study provides empirical data showing that we must be careful when interpreting the scores in clinical applications, identifies the discriminating power, and proposes more appropriate cut-off scores. In addition, the sampling for making a Korean norm of the Bayley-III is discussed. It is preferable that infants in Korea be assessed with validated Korean norms, and the standardization process to obtain Korean normative data must be performed carefully.
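Choosing a diagnostic cut-off from a ROC curve, as done above for the Bayley-III composite scores, is often handled with the Youden index (sensitivity + specificity − 1). The sketch below implements that selection on toy data; the scores and delay labels are synthetic, and the actual cut-offs in the paper were 13~28 points above 69.

```python
import numpy as np

# Youden-index cut-off selection for a "low score = delayed" test.
# Scores and labels below are synthetic illustrations.

def youden_cutoff(scores, delayed):
    """Return the threshold maximizing TPR - FPR, where a score at or
    below the threshold is classified as delayed."""
    scores = np.asarray(scores, float)
    delayed = np.asarray(delayed, bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):
        pred = scores <= t                       # low score -> predicted delay
        tpr = (pred & delayed).sum() / delayed.sum()
        fpr = (pred & ~delayed).sum() / (~delayed).sum()
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t

# Toy sample: delayed infants score lower on average
scores  = [60, 65, 70, 75, 80, 85, 90, 95, 100, 105]
delayed = [ 1,  1,  1,  1,  0,  0,  0,  0,   0,   0]
cutoff = youden_cutoff(scores, delayed)          # 75 separates the groups
```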

Isolation and Physicochemical Properties of Rice Starch from Rice Flour using Protease (단백질분해효소에 의한 쌀가루로부터 쌀전분의 분리 및 물리화학적 특성)

  • Kim, ReeJae;Oh, Jiwon;Kim, Hyun-Seok
    • Food Engineering Progress
    • /
    • v.23 no.3
    • /
    • pp.193-199
    • /
    • 2019
  • This study aimed to investigate the impact of protease treatments on the yield of rice starch (RST) from frozen rice flours, and to compare the physicochemical properties of RST isolated by alkaline steeping (control) and by enzymatic isolation (E-RST). Although the yield of E-RST, prepared according to conditions from a modified 2³ complete factorial design, was lower than the control, the opposite trend was observed in its purity. E-RST samples with yields above 50% were selected (RST1, isolated for 8 h at 15℃ with 0.5% protease; RST2, isolated for 24 h at 15℃ with 1.5% protease; RST3, isolated for 24 h at 15℃ with 0.5% protease). Amylose contents did not significantly differ between the control and RST2. Relative to the control, solubilities were higher for all E-RST, but swelling power did not significantly differ for E-RST except RST1. Although all E-RST showed higher gelatinization temperatures than the control, the reverse trend was found in the gelatinization enthalpy, and the pasting viscosities of all E-RST were lower than those of the control. Consequently, the enzymatic isolation method using protease would be a more time-saving and eco-friendly way of preparing RST than alkaline steeping, even though the characteristics of the product differ.
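A 2³ complete factorial design like the one used above simply enumerates every combination of two levels of three factors. The sketch below generates such a design; the factor names and levels are illustrative assumptions based on the isolation conditions the abstract mentions (time, temperature, protease dose), not the paper's actual design table.

```python
from itertools import product

# Hypothetical 2^3 complete factorial design for enzymatic isolation
# conditions. Factor names and levels are assumed for illustration.

factors = {
    "time_h":        (8, 24),
    "temperature_c": (15, 25),
    "protease_pct":  (0.5, 1.5),
}

# Every combination of the two levels of each factor: 2 * 2 * 2 = 8 runs
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```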

Korean Buddhist Pictures and Performances-Focused on Ttangseolbeop performed at Samcheok Anjeongsa Temple (한국의 불교그림과 공연 - 삼척 안정사에서 연행되는 땅설법을 중심으로 -)

  • Kim, Hyung-Kun
    • (The) Research of the performance art and culture
    • /
    • no.41
    • /
    • pp.219-255
    • /
    • 2020
  • This article was prompted by Victor H. Mair's book 'Painting and Performance'. The book showed that Buddhist paintings are common throughout the regions where Buddhism spread, that performances using them exist as well, and that, even beyond Buddhism, this form of performance may be global. The problem, however, was 'Korea': no record or living transmission of a corresponding performance was known. Against this background, the Ttangseolbeop of Anjeongsa Temple in Samcheok was made public in 2018. The academic community was delighted on the one hand and troubled on the other, the worry being summed up as 'synchronic and diachronic universality': is the Ttangseolbeop handed down at Anjeongsa unique to this one temple, or was it transmitted more widely and thus universal at the national level? Prior to this essential question, however, we do not yet know the full picture of Ttangseolbeop, so this article was prepared to present its overall outline. A 'picture' runs through the whole of Ttangseolbeop, and understanding the various ways that picture is operated is the first step toward understanding the practice. There are five repertoires regarded as important (called the main halls), and more besides. What these repertoires share is the narrative structure of a Buddhist character's life. In such a narrative the most important element is the Byeonsangdo (narrative painting), and Ttangseolbeop conveys its contents in various ways. Since the Byeonsangdo that anchors the narrative could not be seen in the evening without light, special devices were required: shadow play and Yeongdeung (lantern projection). In other words, Ttangseolbeop has three modes of performance.
The first uses the Byeonsangdo itself, the second uses shadows, and the third uses Yeongdeung. The choice among them is made not according to the contents of the repertoire but according to the performance environment: when there is light and the picture can be seen, the Byeonsangdo is used; in the evening, when it was too dark to see (in the days before electricity), the other modes were used. The use of such visual devices confirms a transition to a visual culture one step beyond an oral one. Moreover, unlike an epic narrative, the power of the condensed image gave viewers an opportunity to experience the mystery of Buddhism through emotional stimulation.

Introduction of the Best Practices in the Pakistan Gulpur HEPP (파키스탄 Gulpur 수력발전 현장의 Best Practices 소개)

  • JANG, Ock Jae;HONG, Won Pyo;CHAE, Hee Moon
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.216-217
    • /
    • 2022
  • The Gulpur hydropower project is an IPP (Independent Power Producer) investment in which a 102 MW hydroelectric plant is built in Pakistan, a country suffering from power shortages, operated and managed for 30 years, and then transferred to the Pakistani government. The project was financed with equity from the sponsors, Korea South-East Power, DL E&C, and Lotte E&C, and with loans from lenders including the ADB, IFC, and K-EXIM. DL E&C and Lotte E&C performed the EPC (Engineering, Procurement, Construction) work, and ISAN served as the design consultant. The Gulpur project is a run-of-river scheme with a design flow of 201 m³/s, an installed capacity of 102 MW, and an expected average annual generation of 398 GWh. Its main structures are a river diversion facility (cofferdam and diversion tunnel) with a 1-year design return period, a concrete gravity dam (H 67 m, L 205 m), two headrace tunnels (D 6.7 m, L 215 m), and an outdoor powerhouse (H 51 m, W 60 m, L 38 m, two Kaplan units); construction began in October 2015 and commercial operation started in March 2020. This was DL E&C's first overseas EPC hydropower project, and extensive studies aimed at economical design and efficient, safe construction yielded a variety of technical improvements. This paper introduces and shares the best practices derived from the Gulpur project. First, the optimal design return period of the river diversion facilities for the concrete gravity dam was determined. The scale of diversion facilities is generally set by the design return period given in design standards, but overseas standards specify 10 years while Korean standards specify 1 to 2 years. Because the scale of the diversion facilities strongly affects project economics, the optimal design return period must be determined; risk analysis and the expected monetary value (EMV) method were used to find it and to identify its governing factors. The risk combined the cofferdam failure probability computed by Monte Carlo simulation with the overtopping probability derived from the return period, and the costs and damages considered were the construction cost of the diversion facilities, the reconstruction cost and liquidated damages in case of cofferdam failure, and the recovery cost in case of overtopping. The results showed that the service period of the diversion facilities and the recovery cost after overtopping have the greatest influence on the design return period; in particular, the smaller the recovery cost, the more reasonable it is to choose a lower design return period. For example, with a service period of 3 to 5 years and a recovery cost of 0.5 to 1.0 million USD or less, the optimal design return period of the cofferdam was 1 to 2 years. Moreover, while the service period is fixed by the size of the main dam and the construction schedule and cannot be adjusted by the designer, the recovery cost depends on the construction manager, so preparing active flood-damage reduction and recovery measures can improve project economics. Second, a flood Early Warning System was developed and applied to improve project economics and to secure safety during dam construction in the flood season. Run-of-river hydropower dams are mostly located in mountainous areas, where localized rainfall and steep terrain make flash floods likely. A flood early warning system for run-of-river sites, able to detect an impending flood (overtopping) in advance and propagate the warning across the site, was therefore built in four parts: risk recognition, monitoring and warning, dissemination and communication, and response capability.
In the risk recognition part, the hazard, vulnerability, and risk of cofferdam overtopping were identified; in the monitoring and warning part, the expected site water level derived from an upstream gauge and the water level actually measured at the site were managed against a warning flood level and a danger flood level. In the dissemination and communication part, an emergency communication flow chart was operated through the site construction organization, and to improve response capability, the action plan of each team in the flow chart was detailed. Third, a site-specific blast vibration attenuation curve was derived from the site geology and some fifty blasting tests, yielding a method that secures both constructability and concrete quality. In concrete dam construction, slope excavation and concrete placement must proceed simultaneously to finish within the limited schedule, but when blasting is performed near a freshly placed concrete surface, elastic waves above a certain level affect the curing concrete. Numerous field blasting tests established the relationship between blast distance and peak particle velocity, i.e., the blast vibration attenuation curve, characterizing the site's blast vibration behavior. In addition, safe vibration velocities for each concrete age were selected from previous studies, and the charge allowable per blast was calculated from the distance between the concrete placement surface and the blast location so as not to exceed the safe vibration velocity. This systematic approach resolved the controversy over performing concrete placement and blasting at the same time.
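The expected-monetary-value comparison of diversion design return periods described above can be sketched as a small Monte Carlo exercise: for each candidate return period, simulate whether the cofferdam is overtopped during its service life and add the expected recovery cost to the construction cost. All costs, lifetimes, and return periods below are hypothetical placeholders, and the real study also modeled cofferdam failure probability and liquidated damages.

```python
import random

# Hypothetical EMV comparison of cofferdam design return periods.
# Annual exceedance probability for return period T is 1/T; over an
# L-year service life the overtopping probability is 1 - (1 - 1/T)**L.

def emv(return_period_yr, service_life_yr, construction_cost, recovery_cost,
        n_sims=100_000, seed=42):
    rng = random.Random(seed)
    p_annual = 1.0 / return_period_yr
    overtop = 0
    for _ in range(n_sims):                      # Monte Carlo over service life
        if any(rng.random() < p_annual for _ in range(service_life_yr)):
            overtop += 1
    p_overtop = overtop / n_sims
    return construction_cost + p_overtop * recovery_cost

# Assumed placeholder costs (mil USD): a larger cofferdam costs more up
# front but is overtopped less often during its 4-year service life.
emv_2yr  = emv(2.0,  4, construction_cost=1.0, recovery_cost=0.8)
emv_10yr = emv(10.0, 4, construction_cost=3.0, recovery_cost=0.8)
```

With a small recovery cost, the lower design return period comes out cheaper, consistent with the finding above that small recovery costs justify a 1-to-2-year design frequency.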


The Effect of Brand Extension of Private Label on Consumer Attitude - a focus on the moderating effect of the perceived fit difference between parent brands and an extended brand - (PL의 브랜드확장이 소비자태도에 미치는 영향에 관한 연구 : 모브랜드 적합도 인식 차이의 조절효과를 중심으로)

  • Kim, Jong-Keun;Kim, Hyang-Mi;Lee, Jong-Ho
    • Journal of Distribution Research
    • /
    • v.16 no.4
    • /
    • pp.1-27
    • /
    • 2011
  • Introduction: Sales of private labels (PLs) have been growing in recent years. Globally, PLs have already achieved a 20% share, and between 25 and 50% in most European markets (ACNielsen, 2005). These products aim to offer quality and prices comparable to national brand (NB) products and have been continuously eroding manufacturers' national-brand market share. Stores have also started introducing premium PLs that are of higher quality and more reasonably priced compared to NBs. Worldwide, many retailers already have a multiple-tier private label architecture. Consumers as a consequence now have a more diverse brand choice in store than ever before. Since premium PLs are priced higher than regular PLs and, in some cases, even above NBs, stores can expect to generate higher profits. Brand extensions and private labels have been extensively studied in the marketing field, but less attention has been paid to private label extension. Therefore, this research focuses on private label extension using the Multi-Attribute Attitude Model (Fishbein and Ajzen, 1975). In particular, few studies consider the hierarchical effect of the PL's two parent brands: the store brand and the original PL. We assume that the attitude toward each of the two parent brands affects the attitude toward the extended PL, and that the influence of each parent brand on the extended PL varies with the perceived fit between that parent brand and the extension. This research focuses on how these two parent brands act as reference points to one another in consumers' choice consideration. Specifically, we seek to understand how store image and attitude toward the original PL affect consumer perceptions of the extended premium PL.
How consumers perceive extended premium PLs could give retail managers specific strategic guidance on whether it is more effective to position the extended premium PL similarly or dissimilarly to the original PL, especially on the quality dimension, and congruently with the store image. There is an extensive body of research on branding and brand extensions (e.g. Aaker and Keller, 1990) and, more recently, on PLs (e.g. Kumar and Steenkamp, 2007). However, no studies to date look at the upgrading and influence of original PLs and of attitude toward the store on the premium PL extension. This research wishes to address this gap, using the perceived fit difference between the parent brands and the extended premium PL as the context. To meet the above objectives, we investigate which factors heighten consumers' positive attitude toward the premium PL extension. Research Model and Hypotheses: When considering the attitude toward the premium PL extension, we expect four factors to have an influence: attitude toward the store; attitude toward the original PL; perceived congruity between the store image and the premium PL; and perceived similarity between the original PL and the premium PL. We expect all of these factors to influence consumer attitude toward the premium PL extension. Figure 1 gives the research model and hypotheses. Method: Data were collected by an intercept survey of consumers at discount stores. 403 survey responses were obtained (59.8% female, across all age ranges). Respondents answered a series of questions measured on 7-point Likert-type scales.
The survey consisted of questions that measured: trust in the store and in the original PL; satisfaction with the store and the original PL; attitudes toward the store, the original PL, and the extended premium PL; the perceived similarity of the original PL and the extended premium PL; and the perceived congruity between the store image and the extended premium PL. Product images with specific explanations of the features of the premium PL, regular PL, and NB were used as the stimuli. We developed scales to measure the research constructs; Cronbach's alpha was computed for each construct, with the reliability of all constructs exceeding the .70 standard (Nunnally, 1978). Results: To test the hypotheses, path analysis was conducted using LISREL 8.30. The path analysis for verification of the model produced satisfactory results, with acceptable validity indices (${\chi}^2=427.00$ (P=0.00), GFI= .90, AGFI= .87, NFI= .91, RMSEA= .062, RMR= .047). With the increasing retailer use of premium PLs, the intention of this research was to examine how consumers use the original PL and the store image as reference points for their attitude toward the premium PL extension. Results (see tables 1 & 2) show that the attitude toward each parent brand (the store and the original PL) influences the attitude toward the extended PL, and that perceived fit moderates these influences: attitude toward the extended PL was influenced by the relative level of perceived fit. Discussion of results and future direction: These results suggest that future PL extension strategy needs to consider that a positive parent-brand attitude is strongly associated with the attitude toward the extension. Specifically, to improve attitude toward a PL extension, building and maintaining a positive attitude toward the original PL is necessary, and positioning the premium PL congruently with the store image is also important for a positive attitude.
To improve this research, the following alternatives should also be considered. To improve the research model's predictive power, more diverse products should be included in the study. Other product attributes, such as design and brand name, should also be included, since we considered only trust and satisfaction as factors building consumer attitudes.
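The reliability check reported above (Cronbach's alpha exceeding .70 for every construct) is a short computation on the item-response matrix. The sketch below implements the standard formula on synthetic 7-point Likert data; the number of items and respondents are assumptions for illustration.

```python
import numpy as np

# Cronbach's alpha for a multi-item construct:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# Item responses below are synthetic Likert-style data, not the survey's.

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(4, 1, size=(300, 1))                # shared attitude factor
items = np.clip(latent + rng.normal(0, 0.7, (300, 4)), 1, 7)  # 4 correlated items
alpha = cronbach_alpha(items)                           # well above .70 here
```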


Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.111-124
    • /
    • 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be classified into model-design studies for predicting cardiovascular disease and studies comparing disease prediction results. In the foreign literature, studies predicting cardiovascular disease with data mining techniques were predominant; domestic studies were not much different, but focused mainly on hypertension and diabetes. Since hyperlipidemia, like hypertension and diabetes, is a chronic disease of high importance, this study selected it as the disease to be analyzed. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are already known to have excellent predictive power. To achieve the purpose of this study, we used the 2012 Korea Health Panel data set. The Korea Health Panel produces basic data on health expenditure, health level, and health behavior, and has conducted an annual survey since 2008. In this study, 1,088 patients with hyperlipidemia were randomly selected from the inpatient, outpatient, emergency, and chronic disease data of the 2012 Korea Health Panel, and 1,088 non-patients were also randomly extracted, for a total of 2,176 subjects. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression: among the 17 variables, the categorical variables (except length of smoking) were expressed as dummy variables, treated as separate variables relative to the reference group, and analyzed.
Six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at a significance level of 0.1. Second, C4.5, a decision tree algorithm, was used; the significant input variables were age, smoking status, and education level. Finally, a genetic algorithm was used: the input variables it selected for the SVM were six (age, marital status, education level, economic activity, smoking period, and physical activity status), and those it selected for the artificial neural network were three (age, marital status, and education level). Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia, and compared classification performance using TP rate and precision. The main results of the analysis are as follows. First, the accuracy of the SVM was 88.4% and the accuracy of the artificial neural network was 86.7%. Second, the accuracy of classification models using the input variables selected through the stepwise method was slightly higher than that of models using all variables. Third, the precision of the artificial neural network was higher than that of the SVM when only the three variables selected by the decision tree were used as input. For the models based on the input variables selected through the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network was 87.9%. Finally, this study showed that stacking, the meta-learning algorithm proposed here, performs best when it uses the predicted outputs of the SVM and MLP as input variables of an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases.
To do this, we used SVM and meta-learning algorithms, which are known to have high accuracy. As a result, the classification accuracy for hyperlipidemia with stacking as the meta-learner was higher than with other meta-learning algorithms, although the predictive performance of the proposed meta-learning algorithm is the same as that of the best-performing single model, the SVM (88.6%). The limitations of this study are as follows. First, various variable selection methods were tried, but most variables used in the study were categorical dummy variables; with many categorical variables, the results may differ if continuous variables are used, because models such as decision trees can be better suited to categorical variables than general models such as neural networks. Despite these limitations, this study has significance in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously. The improvement in model accuracy obtained by applying various variable selection techniques is meaningful, and we expect the proposed model to be effective for the prevention and management of hyperlipidemia.
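The stacking configuration the study found best (SVM and MLP base learners whose predictions feed an SVM meta-classifier) can be sketched with scikit-learn's `StackingClassifier`. The data below is synthetic (`make_classification`), not the Korea Health Panel, so the accuracy is only illustrative.

```python
# Sketch of stacking with SVM + MLP base learners and an SVM meta-classifier,
# mirroring the configuration described above. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=0)),
                ("mlp", MLPClassifier(max_iter=1000, random_state=0))],
    final_estimator=SVC(random_state=0),     # SVM as the meta-classifier
)
stack.fit(X_tr, y_tr)
accuracy = stack.score(X_te, y_te)           # test-set classification accuracy
```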

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.221-241
    • /
    • 2018
  • Deep learning has been getting attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolutional Neural Network (CNN). CNN is characterized by recognizing partial features in small sections of the input image and combining them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been limited to image recognition and natural language processing; the use of deep learning techniques for business problems is still at an early research stage. If their performance is proved, they can be applied to traditional business problems such as marketing response prediction, fraudulent transaction detection, bankruptcy prediction, and so on. It is therefore a very meaningful experiment to diagnose the possibility of solving business problems with deep learning, based on the case of online shopping companies, which have big data, relatively easily identifiable customer behavior, and high utilization value. Especially for online shopping companies, the competitive environment is changing rapidly and becoming more intense, so analyzing customer behavior to maximize profit is becoming ever more important. In this study, we propose a 'CNN model of heterogeneous information integration' using CNN as a way to improve the prediction of customer behavior in online shopping enterprises.
The proposed model learns with a convolutional neural network on top of a multi-layer perceptron structure by combining structured and unstructured information; to optimize its performance, we examine 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', evaluate the performance of each architecture, and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and VOC data of a specific online shopping company in Korea. The extraction criteria covered 47,947 customers who registered at least one VOC in January 2011 (one month); their customer profiles, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month were used. The experiment is divided into two stages. In the first stage, we evaluate the three architectures that affect the performance of the proposed model and select optimal parameters; we then evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines both structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that the use of unstructured information contributes to predicting customer behavior, and that CNN can be applied to business problems as well as to image recognition and natural language processing.
The experiments confirm that CNN is effective in understanding and interpreting the meaning of context in textual VOC data, and it is significant that this empirical research, based on the actual data of an e-commerce company, could extract very meaningful information for customer behavior prediction from VOC data written in text form directly by customers. Finally, the various experiments provide useful information for future research on parameter selection and model performance.
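The core idea of the proposed model can be sketched in a few lines of NumPy: a convolution slides over the word vectors of a VOC text to pick up local (partial) features, max-pooling summarizes them, and the result is concatenated with structured customer features before classification. All dimensions, and the use of a single filter, are arbitrary simplifications of the paper's architecture.

```python
import numpy as np

# Toy sketch of heterogeneous information integration: 1-D convolution over
# text embeddings (unstructured) + concatenation with structured features.
# Shapes and data are assumed for illustration only.

rng = np.random.default_rng(0)
text = rng.normal(size=(20, 8))        # 20 words, 8-dim embeddings (unstructured)
structured = rng.normal(size=5)        # e.g. purchase counts, tenure (structured)

kernel = rng.normal(size=(3, 8))       # one filter spanning a 3-word window
window_scores = np.array([
    np.sum(text[i:i + 3] * kernel) for i in range(len(text) - 2)
])                                     # convolution: one score per window
pooled = window_scores.max()           # max-pooling: strongest local feature

# Heterogeneous integration: pooled text feature + structured features,
# ready to feed a downstream classifier (e.g. a multi-layer perceptron)
combined = np.concatenate([[pooled], structured])
```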

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This however results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, a significant memory waste can be registered as well: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on this hypothesis. Moreover, we use a technique that, while not restricting the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common to fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets describing the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by: Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 3 * (5 + 3) = 24, and the memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to store on each memory row the membership value of every fuzzy set; the word dimension would then be 8*5 bits, and the memory dimension would therefore have been 128*40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on elements 32, 64 and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (Combinatory Net). If a stored index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on the memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
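The comparator stage can be modelled in a few lines. This is a software sketch of the behaviour described for the Combinatory Net, not the circuit itself: a memory row is represented as a list of (index, value) pairs, and the rule's membership-function index stands in for the μCOD field read from the Program Memory.

```python
# Hypothetical sketch of the comparator (Combinatory Net) behaviour:
# the rule's membership-function index is compared against the indices
# stored in the antecedent-memory row; a match emits the stored weight,
# otherwise zero. Representation and names are illustrative.

def rule_weight(row_pairs, mu_cod):
    """row_pairs: up to 3 (index, value) pairs stored for one element
    of the universe of discourse; mu_cod: the membership-function index
    requested by the rule. Returns the stored membership value, or 0
    when the set is not stored (its membership value is null here)."""
    for idx, val in row_pairs:
        if idx == mu_cod:
            return val
    return 0

# One element of the universe with non-null values on sets 3 and 4
# (illustrative numbers, not taken from the paper's figure):
row = [(3, 20), (4, 12)]
print(rule_weight(row, 3), rule_weight(row, 5))   # 20 0
```

Because every stored index is compared in parallel in the hardware version, the lookup costs no more time than indexing a dense vector, which is why the time performance matches the vectorial method.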
The number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
