• Title/Summary/Keyword: performance parameter

Effect of Various β-1,3-glucan Supplements on the Performance, Blood Parameter, Small Intestinal Microflora and Immune Response in Laying Hens (β-Glucan 제제들이 산란계의 생산성, 혈액 성상과 소장내 미생물 균총 및 면역 체계에 미치는 영향)

  • Park, K.W.;Rhee, A.R.;Lee, I.Y.;Kim, M.K.;Paik, I.K.
    • Korean Journal of Poultry Science
    • /
    • v.35 no.2
    • /
    • pp.183-190
    • /
    • 2008
  • This study was conducted to investigate the effect of feeding diets supplemented with β-glucan products on the performance, small intestinal microflora and immune response in laying hens. The β-glucan products used in the experiment were BetaPolo® (soluble β-glucan of microbial cell wall origin), HiGlu® (microbial cell wall origin), OGlu® (oat origin) and BGlu® (barley origin). A total of 720 forty-week-old Hy-Line Brown laying hens were divided into 5 dietary treatments: T1, Control (C); T2, BetaPolo®; T3, HiGlu®; T4, OGlu®; T5, BGlu®. Each treatment was replicated 4 times with 36 birds per replicate housed in two-bird cages, arranged according to a completely randomized block design. The feeding trial lasted 40 days under a 16 h lighting regimen. There were significant differences among treatments in hen-house egg production, feed intake and feed conversion. The HiGlu treatment was significantly higher than the OGlu treatment in hen-house egg production. The β-glucan supplemented treatments were lower than the control in feed intake and feed conversion ratio. All β-glucan supplemented treatments were significantly higher than the control in eggshell strength. Eggshell color and Haugh unit tended to be lower in the supplemented groups than in the control. IgY concentration was not significantly affected by treatments; at the 5th week of the experiment, however, IgY concentration tended to increase in the supplemented groups. Among the leukocyte parameters, WBC, heterophil, lymphocyte, monocyte and eosinophil concentrations were lower in the supplemented groups than in the control. Among the erythrocyte parameters, HCT (hematocrit) and MCV (mean corpuscular volume) were significantly affected by treatment; MCV of the supplemented groups was higher than that of the control. Immunoglobulin concentrations in the birds were not significantly different among treatments; however, IgA concentration tended to be lower in the supplemented groups than in the control. The cfu of small intestinal microflora were not significantly different among treatments, but that of Cl. perfringens tended to be lower than in the control. The results of this experiment indicate that feeding β-glucan to laying hens improves the feed conversion ratio and eggshell strength, and also modifies the intestinal microflora and immune responses.

Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.39-54
    • /
    • 2013
  • The recent explosive increase of electronic commerce provides many advantageous purchase opportunities to customers. In this situation, customers who do not have enough knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect the user's preferences and provide a recommendation list to the users. Thus, the product recommender system in an online shopping store has been known as one of the most popular tools for one-to-one marketing. However, recommender systems that do not properly reflect the user's preferences cause user disappointment and waste of time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting the user's precise preferences. The research data were collected from a real-world online shopping store which deals in products from famous art galleries and museums in Korea. The data initially contained 5,759 transaction records; 3,167 records remained after deletion of null data. In this study, we transform the categorical variables into dummy variables and exclude outlier data. The proposed model consists of two steps. The first step predicts customers who have a high likelihood of purchasing products in the online shopping store. In this step, we first use logistic regression, decision trees, and artificial neural networks to predict customers who have a high likelihood of purchasing products in each product group. We perform these data mining techniques using SAS E-Miner software. We partition the dataset into modeling and validation sets for the logistic regression and decision trees, and into training, test, and validation sets for the artificial neural network model. The validation dataset is the same for all experiments. Then we composite the results of each predictor using multi-model ensemble techniques such as bagging and bumping. Bagging is the abbreviation of "Bootstrap Aggregation"; it combines the outputs of several machine learning techniques to raise the performance and stability of prediction or classification, and is a special form of the averaging method. Bumping is the abbreviation of "Bootstrap Umbrella of Model Parameters"; it considers only the model which has the lowest error value. The results show that bumping outperforms bagging and the other predictors except for the "Poster" product group, for which the artificial neural network model performs better than the other models. In the second step, we use market basket analysis to extract association rules for co-purchased products. We extract thirty-one association rules according to the values of the Lift, Support, and Confidence measures. We set the minimum transaction frequency to support associations at 5%, the maximum number of items in an association at 4, and the minimum confidence for rule generation at 10%. This study also excludes extracted association rules with a lift value below 1. We finally obtain fifteen association rules after excluding duplicate rules. Among the fifteen association rules, eleven rules contain associations between products in the "Office Supplies" product group, one rule includes an association between the "Office Supplies" and "Fashion" product groups, and the other three rules contain associations between the "Office Supplies" and "Home Decoration" product groups.
Finally, the proposed product recommender system provides a list of recommendations to the proper customers. We test the usability of the proposed system using a prototype and real-world transaction and profile data. To this end, we construct the prototype system using ASP, JavaScript and Microsoft Access. In addition, we survey user satisfaction with the recommended product list from the proposed system and with randomly selected product lists. The participants in the survey are 173 persons who use MSN Messenger, Daum Café, and P2P services. We evaluate user satisfaction using a five-point Likert scale. This study also performs a paired-sample t-test on the survey results. The results show that the proposed model outperforms the random selection model at the 1% statistical significance level, meaning that users were significantly more satisfied with the recommended product list. The results also show that the proposed system may be useful in a real-world online shopping store.
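
The bumping step above can be illustrated with a short sketch. The following Python code is a minimal, hedged illustration of the "Bootstrap Umbrella of Model Parameters" idea: fit the same classifier on several bootstrap resamples and keep only the fit with the lowest error on the original data. The scikit-learn classifier, the error measure and the synthetic data are illustrative assumptions, not the authors' SAS E-Miner configuration.

```python
# Minimal sketch of bumping: fit on bootstrap resamples, keep the fit with the
# lowest error on the ORIGINAL data. All names and settings are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.utils import resample

def bumping(X, y, n_boot=25, random_state=0):
    rng = np.random.RandomState(random_state)
    best_model, best_err = None, np.inf
    for _ in range(n_boot):
        Xb, yb = resample(X, y, random_state=rng.randint(10**6))
        model = LogisticRegression(max_iter=1000).fit(Xb, yb)
        err = log_loss(y, model.predict_proba(X))  # error measured on the original data
        if err < best_err:
            best_model, best_err = model, err
    return best_model, best_err

# toy usage with synthetic data
rng = np.random.RandomState(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)
model, err = bumping(X, y)
print(err)
```

In the study itself, bumping was applied to the outputs of the logistic regression, decision tree and neural network predictors per product group; the sketch only captures the bootstrap-and-select mechanism.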

Relation of Social Security Network, Community Unity and Local Government Trust (지역사회 사회안전망구축과 지역사회결속 및 지방자치단체 신뢰의 관계)

  • Kim, Yeong-Nam;Kim, Chan-Sun
    • Korean Security Journal
    • /
    • no.42
    • /
    • pp.7-36
    • /
    • 2015
  • This study aims at analyzing differences in social security network building, community unity and local government trust according to socio-demographic features, exploring the relations among social security network building, community unity and local government trust, presenting the results between each variable as a model, and verifying their mutual properties. This study sampled general citizens in Gwangju for about 15 days, from Aug. 15 through Aug. 30, 2014; a total of 450 copies of the questionnaire were distributed using cluster random sampling, 438 were collected, and 412 were used for analysis. This study verified the validity and reliability of the questionnaire through an experts' meeting, a preliminary test, factor analysis and reliability analysis. The reliability of the questionnaire was α=.809~.890. The input data were analyzed according to the study purpose using SPSSWIN 18.0; factor analysis, reliability analysis, correlation analysis, independent-sample t-test, ANOVA, multiple regression analysis and path analysis were used as statistical techniques. The findings obtained through the above study methods are as follows. First, building a social security network has an effect on community unity. That is, the more activated such activities are, the higher the awareness of community institutions; and the more activated street CCTV facilities, anti-crime design and local government security education are, the higher the stability. Second, building a social security network has an effect on trust of local government. That is, the more activated local autonomous anti-crime activity, anti-crime design, local government security education and police public order service are, the more trust in policy, service management and business performance increases. Third, community unity has an effect on trust of local government. That is, the better community institutions are achieved, the higher the trust in policy; also, the more stable the community institutions, the higher the trust in business performance. Fourth, building a social security network has a direct or indirect effect on community unity and local government trust. That is, the social security network has a direct effect on trust of local government, but it has a greater effect through community unity as a mediator. These results show that community unity in the Gwangju region is an important factor, which means it is an important variable mediating the building of a social security network and trust of local government. To win the trust of local residents, we need to prepare various cultural events and active communication spaces and build a social security network that unites them.

Modern Paper Quality Control

  • Olavi Komppa
    • Proceedings of the Korea Technical Association of the Pulp and Paper Industry Conference
    • /
    • 2000.06a
    • /
    • pp.16-23
    • /
    • 2000
  • The increasing functional needs of top-quality printing papers and packaging paperboards, and especially the rapid developments in electronic printing processes and various computer printers during the past few years, set new targets and requirements for modern paper quality. Most of these paper grades of today have a relatively high filler content, are moderately or heavily calendered, and have many coating layers for the best appearance and performance. In practice, this means that many of the traditional quality assurance methods, mostly designed to measure papers made of pure, native pulp only, cannot reliably (or at all) be used to analyze or rank the quality of modern papers. Hence, the introduction of new measurement techniques is necessary to assure and further develop paper quality today and in the future. Paper formation, i.e. the small-scale (millimeter-scale) variation of basis weight, is the most important quality parameter of papermaking due to its influence on practically all the other quality properties of paper. The ideal paper would be completely uniform, so that the basis weight of each small point (area) measured would be the same. In practice, of course, this is not possible because there always exist relatively large local variations in paper. However, these small-scale basis weight variations are the major reason for many other quality problems, including calender blackening, uneven coating results, uneven printing results, etc. The traditionally used visual inspection or optical measurement of the paper does not give us a reliable understanding of the material variations in the paper, because in the modern papermaking process the optical behavior of paper is strongly affected by the use of e.g. fillers, dyes or coating colors. Furthermore, the opacity (optical density) of the paper is changed at different process stages like wet pressing and calendering. The greatest advantage of using the beta transmission method to measure paper formation is that it can be very reliably calibrated to measure the true basis weight variation of all kinds of paper and board, independently of sample basis weight or paper grade. This gives us the possibility to measure, compare and judge papers made of different raw materials or of different color, or even to measure heavily calendered, coated or printed papers. Scientific research in paper physics has shown that the orientation of the top-layer (paper surface) fibers of the sheet plays the key role in paper curling and cockling, causing the typical practical problems (paper jams) with modern fax and copy machines, electronic printing, etc. On the other hand, the fiber orientation at the surface and middle layers of the sheet controls the bending stiffness of paperboard. Therefore, a reliable measurement of paper surface fiber orientation gives us a magnificent tool to investigate and predict paper curling and cockling tendency, and provides the necessary information to fine-tune the manufacturing process for optimum quality. Many papers, especially heavily calendered and coated grades, resist liquid and gas penetration very strongly, being beyond the measurement range of the traditional instruments or resulting in inconveniently long measuring times per sample. The increased surface hardness and the use of filler minerals and mechanical pulp make a reliable, non-leaking sample contact to the measurement head a challenge of its own.
Paper surface coating causes, as expected, a layer which has completely different permeability characteristics compared to the other layers of the sheet. The latest developments in sensor technologies have made it possible to reliably measure gas flow in well-controlled conditions, allowing us to investigate the gas penetration of open structures, such as cigarette paper, tissue or sack paper, and, in the low-permeability range, to analyze even fully greaseproof papers, silicone papers, heavily coated papers and boards, or even to detect defects in barrier coatings! Even nitrogen or helium may be used as the gas, giving us completely new possibilities to rank the products or to find correlations to critical process or converting parameters. All modern paper machines include many on-line measuring instruments which are used to give the necessary information for automatic process control systems. Hence, the reliability of the information obtained from the different sensors is vital for good optimization and process stability. If any of these on-line sensors does not operate perfectly as planned (having even a small measurement error or malfunction), the process control will set the machine to operate away from the optimum, resulting in loss of profit or eventual problems in quality or runnability. To assure optimum operation of the paper machines, a novel quality assurance policy for the on-line measurements has been developed, including control procedures utilizing traceable, accredited standards for the best reliability and performance.
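
As a rough illustration of how a beta-transmission formation measurement relates to basis weight, the sketch below converts transmitted beta counts into basis weight with a generic exponential attenuation law and summarizes formation as the small-scale coefficient of variation. The attenuation coefficient, open-beam count rate and the CV-based formation index are illustrative assumptions, not the calibration of any particular gauge described in the text.

```python
# Sketch: beta-transmission counts -> basis weight map -> formation statistic.
# Constants are illustrative; a real gauge is calibrated against reference samples.
import numpy as np

MU = 0.0025   # assumed mass attenuation coefficient, m^2/g (illustrative)
I0 = 1.0e5    # assumed open-beam count rate

def basis_weight(counts):
    """Convert transmitted beta counts (per pixel) to basis weight in g/m^2."""
    return -np.log(np.asarray(counts, dtype=float) / I0) / MU

def formation_cv(counts):
    """Small-scale basis-weight variation expressed as a coefficient of variation (%)."""
    w = basis_weight(counts)
    return 100.0 * w.std() / w.mean()

# Example: a synthetic 2-D map of counts over a paper sample (millimetre-scale pixels)
counts = I0 * np.exp(-MU * np.random.normal(80.0, 4.0, size=(64, 64)))
print(f"mean basis weight ~ {basis_weight(counts).mean():.1f} g/m2, "
      f"formation CV ~ {formation_cv(counts):.1f} %")
```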

Modified Traditional Calibration Method of CRNP for Improving Soil Moisture Estimation (산악지형에서의 CRNP를 이용한 토양 수분 측정 개선을 위한 새로운 중성자 강도 교정 방법 검증 및 평가)

  • Cho, Seongkeun;Nguyen, Hoang Hai;Jeong, Jaehwan;Oh, Seungcheol;Choi, Minha
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.5_1
    • /
    • pp.665-679
    • /
    • 2019
  • Mesoscale soil moisture measurement from the promising Cosmic-Ray Neutron Probe (CRNP) is expected to bridge the gap between large-scale microwave remote sensing and point-based in-situ soil moisture observations. Traditional calibration based on the N0 method is used to convert the neutron intensity measured at the CRNP to field-scale soil moisture. However, the static calibration parameter N0 used in the traditional technique is insufficient to quantify long-term soil moisture variation and is easily influenced by different time-variant factors, contributing to high uncertainties in the CRNP soil moisture product. Consequently, in this study, we proposed a modified traditional calibration method, the so-called Dynamic-N0 method, which takes into account the temporal variation of N0 to improve the CRNP-based soil moisture estimation. In particular, a nonlinear regression method was developed to directly estimate the time series of N0 from the corrected neutron intensity. The N0 time series were then reapplied to generate the soil moisture. We evaluated the performance of the Dynamic-N0 method for soil moisture estimation against the traditional one by using a weighted in-situ soil moisture product. The results indicated that the Dynamic-N0 method outperformed the traditional calibration technique: the correlation coefficient increased from 0.70 to 0.72, and the RMSE and bias were reduced from 0.036 to 0.026 and from -0.006 to -0.001 m³ m⁻³, respectively. The superior performance of the Dynamic-N0 calibration method revealed that the temporal variability of N0 was caused by hydrogen pools surrounding the CRNP. Although several uncertainty sources contributing to the variation of N0 were not fully identified, the proposed calibration method gives new insight into improving field-scale soil moisture estimation from the CRNP.
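
For context, the sketch below shows the conventional static N0 calibration that the Dynamic-N0 approach generalizes, using the widely cited shape-defining equation θ = ρ_bd (a0/(N/N0 − a1) − a2) with the literature constants a0 = 0.0808, a1 = 0.372, a2 = 0.115, and a simple inversion that yields an N0 value whenever weighted in-situ soil moisture is available. The bulk density and numbers are illustrative, and the paper's actual nonlinear-regression estimation of the N0 time series is not reproduced here.

```python
# Sketch of the standard N0 method and a per-epoch "dynamic" N0 obtained by
# inverting the same equation whenever in-situ soil moisture is available.
import numpy as np

A0, A1, A2 = 0.0808, 0.372, 0.115  # standard shape-defining constants

def soil_moisture(N, N0, rho_bd=1.4):
    """Volumetric soil moisture (m3/m3) from corrected neutron counts N."""
    return rho_bd * (A0 / (N / N0 - A1) - A2)

def n0_from_insitu(N, theta, rho_bd=1.4):
    """Invert the equation: N0 consistent with corrected count N and in-situ theta."""
    return N / (A0 / (theta / rho_bd + A2) + A1)

# Static calibration: one N0 from a single calibration period
N0_static = n0_from_insitu(N=2800.0, theta=0.25)

# "Dynamic" N0: a time series of N0 values estimated whenever in-situ data exist
N_series     = np.array([2750.0, 2820.0, 2900.0])
theta_series = np.array([0.28,   0.24,   0.19])
N0_series = n0_from_insitu(N_series, theta_series)
print(N0_static, N0_series)
```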

A review on the design requirement of temperature in high-level nuclear waste disposal system: based on bentonite buffer (고준위폐기물처분시스템 설계 제한온도 설정에 관한 기술현황 분석: 벤토나이트 완충재를 중심으로)

  • Kim, Jin-Seop;Cho, Won-Jin;Park, Seunghun;Kim, Geon-Young;Baik, Min-Hoon
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.21 no.5
    • /
    • pp.587-609
    • /
    • 2019
  • The short- and long-term stability of bentonite, the favored buffer material in geological repositories for high-level waste, is reviewed in this paper, together with alternative buffer design concepts to mitigate the thermal load from the decay heat of spent fuel (SF) and to further increase disposal efficiency. It is generally reported that irreversible changes in structure, hydraulic behavior, and swelling capacity are produced by temperature increases and vapor flow between 150~250°C. Provided that the maximum temperature of the bentonite remains below 150°C, however, the effects of temperature on its material, structural, and mineralogical stability seem to be minor. The maximum temperature in the disposal system constrains and determines the amount of waste that can be disposed of per unit area and is regarded as an important design parameter influencing the availability of a disposal site. Thus, it is necessary to identify the effects of high temperature on the performance of the buffer and to allow for a thermal constraint greater than 100°C. In addition, the development of high-performance EBS (Engineered Barrier System) components, such as a composite bentonite buffer mixed with graphite or silica and a multi-layered buffer (i.e., a highly thermally conductive layer or an insulating layer), should be taken into account to enhance disposal efficiency, in parallel with the development of a multilayer repository. This will contribute to increasing reliability and securing public acceptance of high-level waste disposal.

Performance Evaluation of Reconstruction Algorithms for DMIDR (DMIDR 장치의 재구성 알고리즘 별 성능 평가)

  • Kwak, In-Suk;Lee, Hyuk;Moon, Seung-Cheol
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.23 no.2
    • /
    • pp.29-37
    • /
    • 2019
  • Purpose: DMIDR (Discovery Molecular Imaging Digital Ready, General Electric Healthcare, USA) is a PET/CT scanner designed to allow application of PSF (Point Spread Function), TOF (Time of Flight) and the Q.Clear algorithm. In particular, Q.Clear is a reconstruction algorithm which can overcome the limitations of OSEM (Ordered Subset Expectation Maximization) and reduce image noise at the voxel level. The aim of this paper is to evaluate the performance of the reconstruction algorithms and to optimize the algorithm combination to improve accurate SUV (Standardized Uptake Value) measurement and lesion detectability. Materials and Methods: A PET phantom was filled with 18F-FDG; the hot-to-background radioactivity concentration ratios were 2:1, 4:1 and 8:1. Scans were performed using the NEMA protocols. Scan data were reconstructed using combinations of (1) VPFX (VUE Point FX (TOF)), (2) VPHD-S (VUE Point HD+PSF), (3) VPFX-S (TOF+PSF), (4) QCHD-S-400 (VUE Point HD+Q.Clear(β-strength 400)+PSF), (5) QCFX-S-400 (TOF+Q.Clear(β-strength 400)+PSF), (6) QCHD-S-50 (VUE Point HD+Q.Clear(β-strength 50)+PSF) and (7) QCFX-S-50 (TOF+Q.Clear(β-strength 50)+PSF). CR (Contrast Recovery) and BV (Background Variability) were compared. Also, the SNR (Signal to Noise Ratio) and RC (Recovery Coefficient) of counts and SUV were compared, respectively. Results: VPFX-S showed the highest CR value for sphere sizes of 10 and 13 mm, and QCFX-S-50 showed the highest value for spheres greater than 17 mm. In the comparison of BV and SNR, QCFX-S-400 and QCHD-S-400 showed good results. The SUV measurements were proportional to the H/B ratio. The RC for SUV is inversely proportional to the H/B ratio, and QCFX-S-50 showed the highest value. In addition, the Q.Clear reconstruction with a β-strength of 400 showed lower values. Conclusion: When a higher β-strength was applied, Q.Clear showed better image quality by reducing noise. On the contrary, when a lower β-strength was applied, Q.Clear showed increased sharpness and decreased PVE (Partial Volume Effect), making it possible to measure SUV with a high RC compared to conventional reconstruction conditions. An appropriate choice among these reconstruction algorithms can improve accuracy and lesion detectability. For this reason, it is necessary to optimize the algorithm parameters according to the purpose.
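
The CR and BV figures of merit compared above follow the standard NEMA NU 2 image-quality definitions; a minimal sketch is given below. The ROI means are assumed to have been extracted from the reconstructed phantom images beforehand, and the numbers are illustrative only, not measured DMIDR data.

```python
# Sketch of NEMA NU 2-style figures of merit: percent contrast recovery (CR)
# for a hot sphere and background variability (BV) over the background ROIs.
from statistics import mean, stdev

def contrast_recovery(hot_roi_mean, bkg_roi_means, activity_ratio):
    """Percent contrast for a hot sphere, e.g. activity_ratio = 4 for a 4:1 phantom."""
    c_bkg = mean(bkg_roi_means)
    return 100.0 * (hot_roi_mean / c_bkg - 1.0) / (activity_ratio - 1.0)

def background_variability(bkg_roi_means):
    """Percent background variability over the background ROIs of one sphere size."""
    return 100.0 * stdev(bkg_roi_means) / mean(bkg_roi_means)

# Illustrative ROI means only (e.g. kBq/mL), not measured data
bkg = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
print(contrast_recovery(hot_roi_mean=32.0, bkg_roi_means=bkg, activity_ratio=4.0))
print(background_variability(bkg))
```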

Design and Implementation of a Web Application Firewall with Multi-layered Web Filter (다중 계층 웹 필터를 사용하는 웹 애플리케이션 방화벽의 설계 및 구현)

  • Jang, Sung-Min;Won, Yoo-Hun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.12
    • /
    • pp.157-167
    • /
    • 2009
  • Recently, leakage of confidential information and personal information has been taking place on the Internet more frequently than ever before. Most such online security incidents are caused by attacks on vulnerabilities in web applications developed carelessly. It is impossible to detect an attack on a web application with existing firewalls and intrusion detection systems. Besides, signature-based detection has a limited capability in detecting new threats. Therefore, many studies concerning methods to detect attacks on web applications employ anomaly-based detection methods that use web traffic analysis. Research about anomaly-based detection through normal web traffic analysis focuses on three problems: how to accurately analyze given web traffic, the system performance needed to inspect the application payload of packets in order to detect attacks on the application layer, and the maintenance and cost of the many newly installed network security devices. The UTM (Unified Threat Management) system, a suggested solution to this problem, had the goal of resolving all security problems at once, but it is not widely used due to its low efficiency and high cost. Besides, the web filter, which performs one of the functions of the UTM system, cannot adequately detect the variety of recent sophisticated attacks on web applications. In order to resolve such problems, studies are being carried out on the web application firewall as a new network security system. As such studies focus on speeding up packet processing by depending on high-priced hardware, the cost to deploy a web application firewall is rising. In addition, current anomaly-based detection technologies that do not take the characteristics of the web application into account cause many false positives and false negatives. In order to reduce false positives and false negatives, this study suggests a real-time anomaly detection method based on analysis of the length of the parameter values contained in the web client's requests. In addition, it designs and suggests a WAF (Web Application Firewall) that can be applied to a low-priced or legacy system to process application data without the help of dedicated hardware. Furthermore, it suggests a method to resolve the sluggish performance attributed to copying packets into the application area for application data processing. Consequently, this study makes it possible to deploy an effective web application firewall at low cost, at a time when deploying an additional security system is considered burdensome because of the many network security systems already in use.
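
To make the parameter-length idea concrete, the sketch below learns per-parameter length statistics from normal requests and flags requests whose value lengths deviate strongly from the learned profile. The simple mean/standard-deviation model, the z-score threshold and the class name are illustrative assumptions; they stand in for, and do not reproduce, the detection model and WAF architecture proposed in the paper.

```python
# Sketch: anomaly detection on request-parameter value lengths.
from urllib.parse import parse_qsl
from statistics import mean, stdev
from collections import defaultdict

class ParamLengthModel:
    def __init__(self, z_threshold=3.0):
        self.lengths = defaultdict(list)   # parameter name -> observed value lengths
        self.z_threshold = z_threshold

    def train(self, query_strings):
        """Learn length statistics from query strings of normal traffic."""
        for qs in query_strings:
            for name, value in parse_qsl(qs):
                self.lengths[name].append(len(value))

    def is_anomalous(self, query_string):
        """Flag a request if any parameter length is far outside the learned profile."""
        for name, value in parse_qsl(query_string):
            hist = self.lengths.get(name)
            if not hist or len(hist) < 2:
                continue                    # unseen parameter: no decision here
            mu, sigma = mean(hist), (stdev(hist) or 1.0)
            if abs(len(value) - mu) / sigma > self.z_threshold:
                return True
        return False

model = ParamLengthModel()
model.train(["id=123&name=kim", "id=98&name=lee", "id=7&name=park"])
print(model.is_anomalous("id=1&name=" + "A" * 500))   # oversized value -> likely flagged
```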

Prediction of Life Expectancy for Terminally Ill Cancer Patients Based on Clinical Parameters (말기 암 환자에서 임상변수를 이용한 생존 기간 예측)

  • Yeom, Chang-Hwan;Choi, Youn-Seon;Hong, Young-Seon;Park, Yong-Gyu;Lee, Hye-Ree
    • Journal of Hospice and Palliative Care
    • /
    • v.5 no.2
    • /
    • pp.111-124
    • /
    • 2002
  • Purpose: Although the average life expectancy has increased due to advances in medicine, mortality due to cancer is on an increasing trend. Consequently, the number of terminally ill cancer patients is also on the rise. Predicting the survival period is an important issue in the treatment of terminally ill cancer patients, since the choice of treatment can vary significantly for the patients, their families, and physicians according to the expected survival. Therefore, we investigated the prognostic factors for increased mortality risk in terminally ill cancer patients to help treat these patients by predicting the survival period. Methods: We investigated 31 clinical parameters in 157 terminally ill cancer patients admitted to the Department of Family Medicine, National Health Insurance Corporation Ilsan Hospital between July 1, 2000 and August 31, 2001. We confirmed the patients' survival as of October 31, 2001 based on medical records and personal data. The survival rates and median survival times were estimated by the Kaplan-Meier method, and the log-rank test was used to compare the differences between the survival rates according to each clinical parameter. Cox's proportional hazards model was used to determine the most predictive subset among the many clinical parameters which affect the risk of death. We predicted the mean, median, first-quartile and third-quartile values of the expected lifetimes with a Weibull proportional hazards regression model. Results: Of the 157 patients, 79 were male (50.3%). The mean age was 65.1±13.0 years in males and 64.3±13.7 years in females. The most prevalent cancer was gastric cancer (36 patients, 22.9%), followed by lung cancer (27, 17.2%) and cervical cancer (20, 12.7%). The survival time decreased with the following factors: mental change, anorexia, hypotension, poor performance status, leukocytosis, neutrophilia, elevated serum creatinine level, hypoalbuminemia, hyperbilirubinemia, elevated SGPT, prolonged prothrombin time (PT), prolonged activated partial thromboplastin time (aPTT), hyponatremia, and hyperkalemia. Among these factors, poor performance status, neutrophilia, and prolonged PT and aPTT were significant prognostic factors of death risk according to the results of Cox's proportional hazards model. The predicted median life expectancy was 3.0 days when all of the above 4 factors were present, 5.7~8.2 days when 3 of these 4 factors were present, 11.4~20.0 days when 2 of the 4 were present, 27.9~40.0 days when 1 of the 4 was present, and 77 days when none of these 4 factors was present. Conclusions: In terminally ill cancer patients, we found that the prognostic factors related to reduced survival time were poor performance status, neutrophilia, prolonged PT and prolonged aPTT. These four prognostic factors enabled the prediction of life expectancy in terminally ill cancer patients.
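
A minimal sketch of the two modelling steps described above (Cox proportional-hazards screening followed by Weibull regression for expected lifetimes) is given below, using the Python lifelines library rather than the statistical software used in the study. The toy DataFrame, the binary coding of the four prognostic factors and the penalizer settings are illustrative assumptions, not the study's data or exact model specification.

```python
# Sketch: Cox PH screening of prognostic factors, then Weibull regression to
# predict survival time. Data are hypothetical; penalizers only keep the tiny
# illustrative dataset from causing convergence problems.
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

df = pd.DataFrame({
    "survival_days":    [3, 6, 10, 14, 21, 30, 40, 55, 70, 77],
    "died":             [1, 1,  1,  1,  1,  1,  1,  1,  0,  1],  # 0 = censored
    "poor_performance": [1, 1,  1,  0,  1,  0,  0,  1,  0,  0],
    "neutrophilia":     [1, 1,  0,  1,  0,  1,  0,  0,  0,  0],
    "prolonged_pt":     [1, 0,  1,  1,  0,  0,  1,  0,  0,  0],
    "prolonged_aptt":   [1, 1,  0,  0,  1,  0,  0,  0,  1,  0],
})

# Step 1: Cox proportional-hazards model to assess which factors raise death risk
cph = CoxPHFitter(penalizer=0.5)
cph.fit(df, duration_col="survival_days", event_col="died")
cph.print_summary()

# Step 2: Weibull regression to predict expected lifetimes for new patients
aft = WeibullAFTFitter(penalizer=0.1)
aft.fit(df, duration_col="survival_days", event_col="died")
new_patients = df.drop(columns=["survival_days", "died"])
print(aft.predict_median(new_patients))       # predicted median survival (days)
print(aft.predict_expectation(new_patients))  # predicted mean survival (days)
```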

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, a significant memory waste can be registered as well. It is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. In figure 1 a term set is represented whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension (in bits) of the membership values, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, Length = 3 × (5 + 3) = 24. The memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set for that element; the fuzzy-set word dimension would then be 8 × 5 bits.
Therefore, the dimension of the memory would have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets. Focusing on the elements 32, 64 and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value for each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on the increase in memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited, although such a constraint is usually not very restrictive since many controllers obtain a good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
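
A minimal software sketch of the memory organization described above follows: for each point of the discretized universe of discourse, only the (at most nfm) fuzzy sets with a non-null membership level are stored as (index, level) pairs, mirroring the 24-bit word layout of the example (nfm = 3, 5 bits per level, 3 bits per index). The Python structures are an illustrative stand-in for the hardware memory, not a description of the actual circuit.

```python
# Sketch: compact membership storage with at most NFM (index, level) pairs per
# universe point, following the sizes used in the text.
U_SIZE, N_SETS, LEVELS, NFM = 128, 8, 32, 3
WORD_BITS = NFM * (5 + 3)   # 24 bits per row vs. 8 * 5 = 40 for full storage

def build_compact_table(membership):
    """membership[s][u] = discretized level (0..31) of fuzzy set s at universe point u."""
    table = []
    for u in range(U_SIZE):
        nonnull = [(s, membership[s][u]) for s in range(N_SETS) if membership[s][u] > 0]
        assert len(nonnull) <= NFM, "more than nfm overlapping sets at point %d" % u
        # pad so every 'memory row' has a fixed width of NFM entries
        table.append(nonnull + [(0, 0)] * (NFM - len(nonnull)))
    return table

def membership_of(table, u, s):
    """Look up the membership level of fuzzy set s at point u (0 if not stored)."""
    for idx, level in table[u]:
        if level > 0 and idx == s:
            return level
    return 0

# toy usage: 8 evenly spaced triangular sets discretized over 128 points
def tri(center, half_width, u):
    return max(0, LEVELS - 1 - abs(u - center) * (LEVELS - 1) // half_width)

centers = [i * (U_SIZE // N_SETS) + 8 for i in range(N_SETS)]
membership = [[tri(c, U_SIZE // N_SETS, u) for u in range(U_SIZE)] for c in centers]
table = build_compact_table(membership)
print(membership_of(table, 64, 4), WORD_BITS)
```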
