• Title/Summary/Keyword: Product Information System


Spatio-temporal enhancement of forest fire risk index using weather forecast and satellite data in South Korea (기상 예보 및 위성 자료를 이용한 우리나라 산불위험지수의 시공간적 고도화)

  • KANG, Yoo-Jin;PARK, Su-min;JANG, Eun-na;IM, Jung-ho;KWON, Chun-Geun;LEE, Suk-Jun
    • Journal of the Korean Association of Geographic Information Studies / v.22 no.4 / pp.116-130 / 2019
  • In South Korea, forest fires are increasing in size and duration due to various factors such as the accumulation of fuel materials and frequent dry conditions in forests. Therefore, it is necessary to minimize the damage caused by forest fires by appropriately providing the probability of forest fire risk. The purpose of this study is to improve the Daily Weather Index (DWI) provided by the current forest fire forecasting system in South Korea. A new Fire Risk Index (FRI) is proposed, which is provided on a 5 km grid through the synergistic use of numerical weather forecast data, satellite-based drought indices, and forest fire-prone areas. The FRI is calculated as the product of the Fine Fuel Moisture Code (FFMC) optimized for Korea, an integrated drought index, and spatio-temporal weights. To improve the temporal accuracy of forest fire risk, monthly weights were applied based on forest fire occurrences by month. Similarly, spatial weights were applied using forest fire density information to improve the spatial accuracy. In a time series analysis of the monthly number of forest fires and the FRI, the relationship between the two was well reproduced. In addition, the 5 km-grid FRI provided more spatially detailed information on forest fire risk than the DWI, which is based on administrative units. The research findings from this study can support appropriate decisions before and after forest fire occurrences.
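The product-of-components structure described in the abstract can be sketched in a few lines. This is a hypothetical illustration only: the function name, the component ranges, and the example values are assumptions, not the authors' exact formulation or calibration.

```python
# Hypothetical sketch of the FRI combination described above: the index
# is the product of a Korea-optimized FFMC, an integrated drought index,
# and monthly/spatial weights. All names and values are illustrative
# assumptions, not the authors' exact formulation.

def fire_risk_index(ffmc, drought, month_weight, cell_weight):
    """FRI for one 5 km grid cell as a product of its components."""
    return ffmc * drought * month_weight * cell_weight

# Example: a cell during the spring fire season is boosted by both its
# monthly weight and its fire-density (spatial) weight.
score = fire_risk_index(ffmc=0.8, drought=0.9, month_weight=1.4, cell_weight=1.2)
```

The multiplicative form means any single component near zero (e.g. a thoroughly wet fuel bed) suppresses the whole index, which matches the intent of combining weather, drought, and occurrence-history information.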

Sentiment analysis on movie review through building modified sentiment dictionary by movie genre (영역별 맞춤형 감성사전 구축을 통한 영화리뷰 감성분석)

  • Lee, Sang Hoon;Cui, Jing;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.97-113 / 2016
  • Due to the growth of internet data and the rapid development of internet technology, "big data" analysis is actively conducted to analyze enormous data sets for various purposes. In recent years especially, a number of studies have applied text mining techniques to overcome the limitations of existing structured data analysis. Various studies on sentiment analysis, a branch of text mining, score opinions based on the distribution of word polarity in documents. Sentiment analysis usually relies on a sentiment dictionary that records the positivity and negativity of vocabulary items. As part of such research, this study constructs a sentiment dictionary customized to a specific data domain. Using a common sentiment dictionary without considering domain characteristics cannot reflect contextual expressions used only in that domain, so a dictionary customized to the data domain can be expected to improve sentiment analysis performance. Therefore, this study suggests a way to construct a customized dictionary that reflects the characteristics of a data domain. Specifically, movie review data are divided by genre, and genre-customized dictionaries are constructed. The performance of the customized dictionaries in sentiment analysis is compared with that of a common sentiment dictionary. IMDb data are chosen as the subject of analysis, and movie reviews are categorized by genre. Six IMDb genres are selected: 'action', 'animation', 'comedy', 'drama', 'horror', and 'sci-fi'. The five highest-ranking and five lowest-ranking movies per genre are selected as the training set, and movie data from September 2012 to June 2014 are collected as the test set.
Using the SO-PMI (Semantic Orientation from Point-wise Mutual Information) technique, we build a customized sentiment dictionary per genre and compare prediction accuracy on review ratings. As a result, prediction using the customized dictionaries improves accuracy by 2.82% overall, a statistically significant improvement. The customized dictionary for 'sci-fi' yields the highest accuracy improvement among the six genres. Although this study shows the usefulness of customized dictionaries in sentiment analysis, further studies are required to generalize the results. Only adjectives were considered as additional terms in the customized dictionaries; other parts of speech, such as verbs and adverbs, could be considered to improve performance. The approach also needs to be applied to other domains, such as product reviews.
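SO-PMI scores a candidate word by the difference between its pointwise mutual information with positive seed words and with negative seed words. The sketch below implements that standard formula from co-occurrence counts; the count values and the 0.01 smoothing constant are assumptions for illustration, not the paper's corpus or settings.

```python
import math

def so_pmi(word, pos_seeds, neg_seeds, hits, cohits, total):
    """Semantic orientation of `word` from PMI against seed words.

    hits[w]         -> occurrence count of w in the corpus
    cohits[(a, b)]  -> co-occurrence count of a and b
    total           -> corpus size in tokens
    A small additive constant (0.01) avoids log(0) for unseen pairs.
    """
    def pmi(w1, w2):
        c12 = cohits.get((w1, w2), 0) + 0.01
        return math.log2(c12 * total / (hits[w1] * hits[w2]))

    return (sum(pmi(word, p) for p in pos_seeds)
            - sum(pmi(word, n) for n in neg_seeds))

# Toy counts: "thrilling" co-occurs mostly with positive seeds, so its
# orientation comes out positive and it would enter the dictionary as
# a positive term for that genre.
hits = {"thrilling": 50, "good": 1000, "bad": 800}
cohits = {("thrilling", "good"): 30, ("thrilling", "bad"): 2}
score = so_pmi("thrilling", ["good"], ["bad"], hits, cohits, total=100000)
```

Genre customization then amounts to computing these counts over each genre's review subcorpus, so the same word can receive different orientations in different genres.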

Development Process for User Needs-based Chatbot: Focusing on Design Thinking Methodology (사용자 니즈 기반의 챗봇 개발 프로세스: 디자인 사고방법론을 중심으로)

  • Kim, Museong;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.221-238 / 2019
  • Recently, companies and public institutions have been actively introducing chatbot services for customer counseling and response. The introduction of chatbot services not only brings labor cost savings to companies and organizations but also enables rapid communication with customers. Advances in data analytics and artificial intelligence are driving the growth of these services. Current chatbots can understand users' questions and offer the most appropriate answers through machine learning and deep learning. The advancement of core chatbot technologies such as NLP, NLU, and NLG has made it possible to understand words, paragraphs, meanings, and emotions. For this reason, the value of chatbots continues to rise. However, technology-oriented chatbots can be inconsistent with what users inherently want, so chatbots need to be addressed in the area of user experience, not just technology. The Fourth Industrial Revolution highlights the importance of user experience alongside the advancement of artificial intelligence, big data, cloud, and IoT technologies. The development of IT and the growing importance of user experience have given people a variety of environments and changed lifestyles, which means that experiences in interactions with people, services (products), and the environment have become very important. Therefore, it is time to develop user needs-based services (products) that can provide new experiences and value. This study proposes a chatbot development process based on user needs by applying design thinking, a representative methodology in the user experience field, to chatbot development. The proposed process consists of four steps. The first step, 'Setting up the knowledge domain', establishes the chatbot's area of expertise.
The second step, 'Knowledge accumulation and insight identification', accumulates the information corresponding to the configured domain and derives insights. The third step, 'Opportunity development and prototyping', is where full-scale development begins. Finally, the 'User feedback' step gathers feedback from users on the developed prototype. This produces a user needs-based service (product) that meets the process's objectives. Beginning with fact gathering through user observation, abstraction is performed to derive insights and explore opportunities; then, through concretization, the desired information is structured and functions fitting the user's mental model are provided, yielding a chatbot that meets users' needs. To confirm the effectiveness of the proposed process, we present an actual construction example for the domestic cosmetics market. The cosmetics market was chosen because user experience plays a strong role there, so user responses can be understood quickly. This study has a theoretical implication in that it proposes a new chatbot development process by incorporating the design thinking methodology, and it differs from existing chatbot development research in that it focuses on user experience rather than technology. It also has practical implications in that it proposes realistic methods that companies or institutions can apply immediately. In particular, anyone can access and utilize the proposed process, since user needs-based chatbots can be developed even by non-experts. Further studies are needed because only one domain was examined.
In addition to the cosmetics market, research should be conducted in other fields where user experience matters, such as the smartphone and automotive markets. Through this, the proposal can mature into a general process for developing chatbots centered on user experience rather than technology.

The Effect of Consumers' Value Motives on the Perception of Blog Reviews Credibility: the Moderation Effect of Tie Strength (소비자의 가치 추구 동인이 블로그 리뷰의 신뢰성 지각에 미치는 영향: 유대강도에 따른 조절효과를 중심으로)

  • Chu, Wujin;Roh, Min Jung
    • Asia Marketing Journal / v.13 no.4 / pp.159-189 / 2012
  • What attracts consumers to bloggers' reviews? Consumers are attracted both by a blogger's expertise (i.e., knowledge and experience) and by his or her unbiased manner of delivering information. Expertise and trustworthiness are both virtues of information sources, particularly when there is uncertainty in decision-making. Noting this point, we postulate that consumers' motives determine the relative weights they place on expertise and trustworthiness. In addition, our hypotheses assume that tie strength moderates consumers' expectations of bloggers' expertise and trustworthiness: the expectation of expertise is enhanced for the power-blog user group (weak ties), while the expectation of trustworthiness is elevated for the personal-blog user group (strong ties). Finally, we theorize that the effect of credibility on willingness to accept a review is moderated by tie strength: the predictive power of credibility is more prominent for personal-blog user groups than for power-blog user groups. To test these assumptions, we conducted a field survey with blog users, collecting retrospective self-report data. The "gourmet shop" was chosen as the target product category, and the obtained data were analyzed by structural equation modeling. The findings provide empirical support for our theoretical predictions. First, we found that the purposive motive aimed at satisfying instrumental information needs increases reliance on bloggers' expertise, whereas the interpersonal-connectivity value of alleviating loneliness elevates reliance on bloggers' trustworthiness. Second, expertise-based credibility is more prominent for power-blog user groups than for personal-blog user groups. While strong ties attract consumers with trustworthiness based on close emotional bonds, weak ties gain consumers' attention with new, non-redundant information (Levin & Cross, 2004).
Thus, when the existing knowledge system used in strong ties does not work smoothly for addressing an impending problem, a weak-tie source can be utilized as a handy reference. We can therefore anticipate that power bloggers secure credibility by virtue of their expertise while personal bloggers rely on their trustworthiness. Our analysis demonstrates that power bloggers appeal more strongly to consumers than personal bloggers do in the area of expertise-based credibility. Finally, the effect of review credibility on willingness to accept a review is higher for the personal-blog user group than for the power-blog user group. The inference that review credibility is a potent predictor of willingness to accept a review is grounded in the analogy that attitude is an effective indicator of purchase intention. However, if memory of established attitudes is blocked, the predictive power of attitude on purchase intention is considerably diminished. Likewise, the effect of credibility on willingness to accept a review can be affected by certain moderators. Inspired by this analogy, we introduced tie strength as a possible moderator and demonstrated that it moderates the effect of credibility on willingness to accept a review. Previously, Levin and Cross (2004) showed that credibility mediates the effect of strong ties on the receipt of knowledge, but this mediation is not observed for weak ties, where a direct path is activated. Thus, the predictive power of credibility on behavioral intention, that is, willingness to accept a review, is expected to be higher for strong ties.
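A moderation effect of the kind hypothesized above is conventionally expressed as an interaction term: tie strength changes the slope of credibility on willingness to accept a review. The toy model below illustrates that idea only; all coefficients are invented, and the paper estimated its actual model with structural equation modeling, not this equation.

```python
# Toy illustration of a moderation (interaction) effect: tie strength
# changes the slope of credibility on willingness to accept a review.
# Coefficients are invented for illustration; the study itself used
# structural equation modeling on survey data.

def willingness(credibility, strong_tie,
                b0=1.0, b_cred=0.3, b_tie=0.2, b_inter=0.4):
    """Linear model with a credibility x tie-strength interaction."""
    tie = 1.0 if strong_tie else 0.0
    return (b0 + b_cred * credibility + b_tie * tie
            + b_inter * credibility * tie)

# Credibility matters more under strong ties (slope b_cred + b_inter)
# than under weak ties (slope b_cred), mirroring the reported pattern.
slope_strong = willingness(1, True) - willingness(0, True)    # ~0.7
slope_weak = willingness(1, False) - willingness(0, False)    # ~0.3
```

A nonzero interaction coefficient is precisely what "tie strength moderates the effect of credibility" means operationally.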


Empirical Analysis on Bitcoin Price Change by Consumer, Industry and Macro-Economy Variables (비트코인 가격 변화에 관한 실증분석: 소비자, 산업, 그리고 거시변수를 중심으로)

  • Lee, Junsik;Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.195-220 / 2018
  • In this study, we conducted an empirical analysis of the factors that affect changes in the Bitcoin closing price. Previous studies have focused on the security of the blockchain system, the economic ripple effects of cryptocurrency, legal implications, and consumer acceptance of cryptocurrency. Cryptocurrency has been studied in various areas, and many researchers and institutions, including governments across countries, are trying to utilize cryptocurrency and apply its underlying technology. Despite the rapid, dramatic changes in cryptocurrency prices and their growing impact, empirical studies of the factors affecting cryptocurrency price changes have been lacking; there were only a few limited studies, business reports, and short working papers. Therefore, it is necessary to determine which factors affect changes in the Bitcoin closing price. For the analysis, hypotheses were constructed along three dimensions, consumer, industry, and macro-economy, and time series data were collected for the variables of each dimension. Consumer variables consist of search traffic for Bitcoin, search traffic for a Bitcoin ban, search traffic for ransomware, and search traffic for war. Industry variables comprise GPU vendors' and memory vendors' stock prices. Macro-economy variables include U.S. dollar index futures, FOMC policy interest rates, and the WTI crude oil price. Using these variables, we performed time series regression analysis to find the relationship between them and changes in the Bitcoin closing price. Before the regression analysis, we performed unit-root tests to verify the stationarity of the time series data and avoid spurious regression; we then ran the regression on the stationary data.
As a result, we found that changes in the Bitcoin closing price are negatively related to search traffic for 'Bitcoin ban' and U.S. dollar index futures, while changes in GPU vendors' stock prices and the WTI crude oil price show positive effects. A 'Bitcoin ban' directly determines the maintenance or abolition of Bitcoin trading, which is why consumers reacted sensitively and the closing price was affected. GPUs are a raw material of Bitcoin mining, and rising company stock prices generally reflect growing sales of those companies' products and services; increases in GPU demand are thus indirectly reflected in GPU vendors' stock prices, which consequently move with changes in the Bitcoin closing price. We also confirmed that U.S. dollar index futures moved in the opposite direction to the Bitcoin closing price. Bitcoin moved like gold, which consumers consider a safe asset, suggesting that consumers also regard Bitcoin as a safe asset. On the other hand, the WTI oil price moved in the same direction as the Bitcoin closing price, implying that Bitcoin is also regarded as an investment asset, like a commodity. The variables that were not significant were search traffic for Bitcoin, search traffic for ransomware, search traffic for war, memory vendors' stock prices, and FOMC policy interest rates. For Bitcoin search traffic, we judged that interest in Bitcoin did not lead to Bitcoin purchases; search traffic did not reflect all of Bitcoin's demand, implying that other factors regulate and mediate Bitcoin purchases. For ransomware search traffic, it is hard to say that concern about ransomware determined overall Bitcoin demand, because only a few people were harmed by ransomware and the percentage of attackers demanding Bitcoin was low.
Moreover, such security incidents are discrete events rather than continuous issues. Search traffic for war was not significant: as in the stock market, prices generally react negatively to war, but in exceptional cases such as the Gulf War the effect depends on stakeholders' profits and circumstances, and we believe this is such a case. Memory vendors' stock prices were not significant because their flagship products are not the VRAM essential for Bitcoin mining. As for FOMC policy interest rates, when interest rates are low, surplus capital is invested in securities such as stocks, but Bitcoin's price fluctuations were so large that it was not recognized as an attractive instrument by consumers. In addition, unlike the stock market, Bitcoin has no safety mechanisms such as circuit breakers or sidecars. Through this study, we verified which factors affect changes in the Bitcoin closing price and interpreted why such changes happened. By establishing the characteristics of Bitcoin as both a safe asset and an investment asset, we provide a guide for how consumers, financial institutions, and government organizations should approach cryptocurrency. Moreover, by corroborating the factors affecting changes in the Bitcoin closing price, researchers gain clues as to which factors should be considered in future cryptocurrency studies.
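The pipeline described (make the series stationary, then regress price changes on explanatory variables) can be sketched minimally. This is an assumed simplification: first differencing stands in for the paper's formal unit-root testing, a single toy regressor replaces the full variable set, and all data values are invented.

```python
# Minimal sketch of the analysis pipeline above: difference the series
# to remove a unit root, then regress Bitcoin price changes on one
# explanatory variable (a hypothetical GPU-vendor stock series). A real
# analysis would run a formal ADF test and use multiple regressors.

def diff(series):
    """First difference x[t] - x[t-1], the usual unit-root remedy."""
    return [b - a for a, b in zip(series, series[1:])]

def ols_slope(x, y):
    """Closed-form slope of the simple regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

btc = [100, 110, 125, 120, 140, 160]    # toy closing prices
gpu = [50, 54, 60, 58, 66, 75]          # toy GPU-vendor stock prices
beta = ols_slope(diff(gpu), diff(btc))  # positive, as the paper found
```

Regressing levels instead of differences here would risk exactly the spurious-regression problem the unit-root step guards against.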

COATED PARTICLE FUEL FOR HIGH TEMPERATURE GAS COOLED REACTORS

  • Verfondern, Karl;Nabielek, Heinz;Kendall, James M.
    • Nuclear Engineering and Technology / v.39 no.5 / pp.603-616 / 2007
  • Roy Huddle, having invented the coated particle at Harwell in 1957, stated in the early 1970s that we now know everything about particles and coatings and should move on to other problems. This was on the occasion of the Dragon fuel performance information meeting in London, 1973: how wrong can a genius be! It took until 1978 before really good particles were made in Germany, then during the Japanese HTTR production in the 1990s, and finally the Chinese 2000-2001 campaign for HTR-10. Here, we present a review of the history and present status. Today, good fuel is measured by different standards from the seventies: where an initial free heavy metal fraction of $9\times10^{-4}$ was typical for early AVR carbide fuel and $3\times10^{-4}$ was acceptable for oxide fuel in THTR, we now insist on values more than an order of magnitude below this. Half a percent particle failure at end-of-irradiation, another ancient standard, is no longer acceptable, even for the most severe accidents. While legislation and licensing have not changed, one reason we insist on these improvements is the preference for passive systems rather than the active controls of earlier times. Following renewed HTGR interest, we report on the start of new or reactivated coated particle work in several parts of the world, considering designs, traditional and new materials, manufacturing technologies, quality control and quality assurance, irradiation and accident performance, modeling and performance predictions, and fuel cycle aspects and spent fuel treatment. In very general terms, the coated particle should be strong, reliable, retentive, and affordable. These properties have to be quantified and will eventually be optimized for a specific application system.
Results obtained so far indicate that the same particle can be used for steam cycle applications with $700-750^{\circ}C$ helium coolant gas exit, for gas turbine applications at $850-900^{\circ}C$, and for process heat/hydrogen generation applications with $950^{\circ}C$ outlet temperatures. There is a clear set of standards for modern high-quality fuel in terms of low levels of heavy metal contamination, manufacture-induced particle defects during fuel body and fuel element making, irradiation- and accident-induced particle failures, and limits on fission product release from intact particles. While gas-cooled reactor design is still open-ended, with blocks for the prismatic design and spherical fuel elements for the pebble-bed design, there is near-worldwide agreement on high-quality fuel: a $500{\mu}m$ diameter $UO_2$ kernel of 10% enrichment is surrounded by a $100{\mu}m$ thick sacrificial buffer layer, followed by a dense inner pyrocarbon layer, a high-quality silicon carbide layer of $35{\mu}m$ thickness at theoretical density, and another outer pyrocarbon layer. Good performance has been demonstrated under both operational and accident conditions, i.e., to 10% FIMA and a maximum of $1600^{\circ}C$ afterwards. It is this wide-ranging demonstration experience that makes this particle superior. Recommendations are made for further work: 1. Generation of data for presently manufactured materials, e.g., SiC strength and strength distribution, PyC creep and shrinkage, and many more material data sets. 2. Renewed start of irradiation and accident testing of modern coated particle fuel. 3. Analysis of existing and newly created data with a view to demonstrating satisfactory performance at burnups beyond 10% FIMA and complete fission product retention even in accidents that exceed $1600^{\circ}C$ for a short period of time. This work should proceed at both national and international levels.
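The consensus particle design quoted above can be captured as simple data. Note the assumptions: the kernel diameter and the buffer and SiC thicknesses come from the text, but the 40 µm pyrocarbon layer thicknesses are typical values added for illustration and are not stated in the abstract.

```python
# The consensus coated-particle design described above, as data.
# Kernel, buffer, and SiC values are from the text; the 40 um
# pyrocarbon thicknesses are typical values assumed for illustration.

KERNEL_DIAMETER_UM = 500          # UO2 kernel, 10% enrichment
LAYERS_UM = {
    "buffer": 100,                # sacrificial porous buffer layer
    "inner PyC": 40,              # assumed typical thickness
    "SiC": 35,                    # dense silicon carbide layer
    "outer PyC": 40,              # assumed typical thickness
}

def particle_diameter(kernel_um, layers_um):
    """Outer diameter: kernel plus twice each coating thickness."""
    return kernel_um + 2 * sum(layers_um.values())

d = particle_diameter(KERNEL_DIAMETER_UM, LAYERS_UM)  # 930 um
```

Each coating adds twice its thickness to the diameter because it wraps the particle on both sides, which is why a 500 µm kernel ends up as a roughly millimeter-scale particle.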

Analysis of Massive Transfusion Blood Product Use in a Tertiary Care Hospital (일개 3차 의료기관의 대량수혈 혈액 사용 분석)

  • Lim, Young Ae;Jung, Kyoungwon;Lee, John Cook-Jong
    • The Korean Journal of Blood Transfusion / v.29 no.3 / pp.253-261 / 2018
  • Background: A massive blood transfusion (MT) requires significant effort by the Blood Bank. This study examined blood product use in MT and emergency O Rh-positive red cells (O RBCs) made directly available for emergency patients from the Trauma Center at Ajou University Hospital. Methods: MT was defined as a transfusion of 10 or more RBC units within 24 hours. Data on total RBCs, fresh frozen plasma (FFP), and platelets (PLTs; single donor platelets (SDP) and random platelet concentrates (PC)) issued by the Blood Bank between March 2016 and November 2017 were extracted from the Hospital Information System and reviewed. One SDP was considered equivalent to 6 units of PC. Results: A total of 345 MTs occurred, and 6233/53268 (11.7%) RBCs, 4717/19376 (24.3%) FFP, and 4473/94166 (4.8%) PLTs were used in MT (P<0.001). Among RBC products in MT and non-MT transfusions, respectively, 28.0% and 34.1% were group A; 27.1% and 26.0% were group B; 37.3% and 29.7% were group O; and 7.5% and 10.2% were group AB (P<0.001). The ratios of RBC:FFP:PLT use were 1:0.76:0.72 in MT and 1:0.31:1.91 in non-MT (P<0.001). A total of 461 O RBCs were used in 36.2% (125/345) of MT cases, and the number of O RBCs transfused per patient ranged from 1 to 18. Conclusion: Group O RBCs are the most used in MT. Ongoing education of clinicians to minimize the overuse of emergency O RBCs in MT is required. A procedure to have thawed plasma readily available for MT appears important, because FFP was used frequently in MT.
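The MT definition used in the study (10 or more RBC units within 24 hours) is easy to operationalize. The sketch below is a hypothetical simplification: the record format of (hours, units issued) is invented for illustration and is not the hospital's actual data model.

```python
# Sketch of the MT definition above: 10 or more RBC units transfused
# within a 24-hour window. The (time_in_hours, units) record format is
# a hypothetical simplification, not the hospital's data model.

def is_massive_transfusion(rbc_events, threshold=10, window_h=24):
    """rbc_events: list of (time_in_hours, units) RBC issue records."""
    events = sorted(rbc_events)
    for i, (t0, _) in enumerate(events):
        # Total units issued in the 24 h window starting at this event.
        in_window = sum(u for t, u in events[i:] if t - t0 < window_h)
        if in_window >= threshold:
            return True
    return False

case = [(0, 4), (5, 3), (20, 4)]   # 11 units inside 24 h -> MT
mt = is_massive_transfusion(case)
```

Sliding the window over every issue event matters: a patient can cross the threshold on a window that does not start at the first transfusion.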

Research Direction for Functional Foods Safety (건강기능식품 안전관리 연구방향)

  • Jung, Ki-Hwa
    • Journal of Food Hygiene and Safety / v.25 no.4 / pp.410-417 / 2010
  • Various functional foods, marketed for health and functional effects, are distributed in the market. These products, taking the forms of foods, tablets, and capsules, are likely to be mistaken for drugs. In addition, non-experts may sell them as foods or use them for therapy. Efforts to create health food regulations and build a regulatory system for functional foods have been made, but these have not yet been communicated to consumers. As a result, the problems of circulating functional foods for therapy and of adding illegal medicinal ingredients to such products have persisted, made worse by internet media. The causes can be categorized into (1) the product itself and (2) its use, but in either case one underlying cause is a lack of communication with consumers. Potential problems that functional foods can cause include illegal substances, hazardous substances, allergic reactions, considerations when administered to patients, drug interactions, ingredients with purity or concentrations too low to be detected, products with metabolic activation, health risks from over- or under-dosing of vitamins and minerals, and products containing alkaloids (Journal of Health Science, 56, Supplement (2010)). Side effects related to functional foods have been increasing because under-qualified functional food companies exaggerate functionality for marketing purposes. KFDA has been informing consumers through its web pages to address these issues, but there is still room for improvement in promoting the proper use of functional foods and avoiding drug interactions. Specifically, institutionalizing the collection of information on approved products and their side effects, establishing reevaluation systems, and standardizing preclinical and clinical tests are becoming urgent.
Also, to provide this information effectively, unified database systems that seamlessly aggregate heterogeneous data across domains, with user interfaces enabling effective one-stop search, are needed.

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.975-976 / 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both using full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM);
on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN Do E, using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules through the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we can achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. They are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a hypothetical MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time required for a single inference; the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even with a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes: an ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

                     MIPS R3000 (regular)   MIPS R3000 (min/max)   ASIC
  6000 inferences    125 s                  49 s                   0.0038 s
  1 inference        20.8 ms                8.2 ms                 6.4 ㎲
  FLIPS              48                     122                    156,250
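The inference mechanism the chips hard-wire can be stated compactly in software: max-min composition with Mamdani-style (clipping) implication, followed by centroid defuzzification. The sketch below is an illustration of that standard scheme, using a short 4-element universe for readability in place of the chip's 64-element membership arrays; the rule values are invented.

```python
# Software sketch of what the chips implement in silicon: max-min
# composition with Mamdani-style (clipping) implication and centroid
# defuzzification. Fuzzy sets are membership arrays over a shared
# universe; a 4-element universe replaces the chip's 64-element arrays
# for readability. All membership values here are invented.

def infer(rules):
    """rules: list of (antecedent_degrees, consequent_set) pairs.
    Firing strength is the min over antecedent membership degrees
    (fuzzy AND); each consequent is clipped at that strength and the
    rule outputs are aggregated with max."""
    out = [0.0] * len(rules[0][1])
    for antecedents, consequent in rules:
        w = min(antecedents)                  # rule firing strength
        for i, m in enumerate(consequent):
            out[i] = max(out[i], min(w, m))   # clip, then max-aggregate
    return out

def centroid(memberships):
    """Defuzzify to a crisp value: centroid over universe indices."""
    num = sum(i * m for i, m in enumerate(memberships))
    den = sum(memberships)
    return num / den if den else 0.0

rules = [([0.7, 0.4], [0.0, 0.5, 1.0, 0.5]),   # IF A and B THEN E
         ([0.2, 0.9], [1.0, 0.5, 0.0, 0.0])]   # IF A' and B' THEN F
agg = infer(rules)
crisp = centroid(agg)
```

Only min and max appear in the inner loop, which is exactly why adding dedicated min/max instructions to a RISC core, as measured in the talk, speeds up this workload.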


Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have been conducted on predicting the success of campaigns targeted at customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways due to the rapid growth of online business, companies are carrying out campaigns of various types on a scale that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From a corporate standpoint, the effectiveness of campaigns is also decreasing: investment costs rise while the actual success rate stays low. Accordingly, various studies are ongoing to improve the effectiveness of campaigns in practice. The campaign system's ultimate purpose is to increase the success rate of campaigns by collecting and analyzing various customer-related data and using them for campaigns. In particular, various recent attempts have been made to predict campaign response using machine learning. Selecting appropriate features is critical because campaign data contain many features. If all of the input data are used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or by correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary process for analyzing a high-dimensional data set.
Among the greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques. However, when the data are noisy and contain many features, these methods suffer from poor classification performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method by using statistical characteristics of the data processed in the campaign system while searching for the feature subsets on which machine learning model performance depends. Features with a strong influence on performance are derived first and features with a negative effect are removed; the sequential method is then applied to improve search efficiency, and the improved algorithm enables generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm. Campaign success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm was found to help analyze and interpret prediction results by providing the importance of the derived features. These include features such as age, customer rating, and sales, which were already known to be statistically important.
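SFS, the simplest of the greedy baselines mentioned above, can be sketched as follows. This is a generic illustration, not the paper's improved algorithm: `score` stands for any subset-evaluation function (e.g. cross-validated accuracy of a model trained on that subset), and all names are hypothetical.

```python
def sfs(features, score, k):
    """Sequential Forward Selection: greedily grow a subset by adding,
    at each step, the single remaining feature that maximizes `score`."""
    selected = []
    while len(selected) < k:
        # Evaluate every not-yet-selected feature together with the
        # current subset and keep the best-scoring addition.
        candidates = [f for f in features if f not in selected]
        best = max(candidates, key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected
```

SBS runs the same loop in reverse (start from the full set and drop the least useful feature each step), and SFFS interleaves forward steps with conditional backward steps so that a previously added feature can be dropped again.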
Unexpectedly, features such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage, which campaign planners had rarely used to select campaign targets, were also selected as important features for campaign response. This confirms that base attributes can be very important features depending on the type of campaign, and makes it possible to analyze and understand the important characteristics of each campaign type.