• Title/Summary/Keyword: analytical study

An Analysis of the Internal Marketing Impact on the Market Capitalization Fluctuation Rate based on the Online Company Reviews from Jobplanet (직원을 위한 내부마케팅이 기업의 시가 총액 변동률에 미치는 영향 분석: 잡플래닛 기업 리뷰를 중심으로)

  • Kichul Choi;Sang-Yong Tom Lee
    • Information Systems Review
    • /
    • v.20 no.2
    • /
    • pp.39-62
    • /
    • 2018
  • Thanks to the growth of computing power and the recent development of data analytics, researchers have started to work on data produced by users through the Internet and social media. This study follows these recent research trends and adopts data-analytic techniques. We focus on the impact of "internal marketing" factors on firm performance, a relationship that has typically been studied through survey methodologies. We examined the job review platform Jobplanet (www.jobplanet.co.kr), a website where current and former employees anonymously review companies and their management. Using web crawling, we collected over 40K data points and performed morphological analysis to classify employees' reviews into internal marketing categories. We then carried out econometric analysis to examine the relationship between internal marketing and market capitalization. Contrary to the findings of extant survey studies, internal marketing is positively related to a firm's market capitalization only in a limited number of areas; in most areas, the relationships are negative. In particular, a female-friendly environment and human resource development (HRD) are the areas exhibiting positive relations with market capitalization in the manufacturing industry. In the service industry, most areas, such as employee welfare and work-life balance, are negatively related to market capitalization. When firm size is small (or the firm's history is short), a female-friendly environment positively affects firm performance. On the contrary, when firm size is large (or the history is long), most of the internal marketing factors are either negative or insignificant. We discuss the theoretical contributions and managerial implications of these results.
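
The abstract describes crawling review text, scoring internal marketing categories, and relating those scores to market capitalization econometrically. The paper's actual specification is not given here, so the following is only a minimal sketch under assumed column names (mktcap_change, welfare_score, wlb_score, culture_score, mgmt_score, industry are hypothetical placeholders), using ordinary least squares in statsmodels; the study's own estimator may differ.

# Minimal sketch (not the paper's specification): regress a firm's market-cap
# fluctuation rate on internal-marketing scores derived from review text.
# All file and column names below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

reviews = pd.read_csv("jobplanet_reviews.csv")   # assumed: one row per individual review
panel = reviews.groupby(["firm", "period"], as_index=False).agg(
    welfare=("welfare_score", "mean"),           # assumed review-derived category scores
    wlb=("wlb_score", "mean"),
    culture=("culture_score", "mean"),
    mgmt=("mgmt_score", "mean"),
    mktcap_change=("mktcap_change", "first"),    # market-cap fluctuation rate per firm-period
    industry=("industry", "first"),
)

# Pooled OLS with industry dummies; the original study may use panel estimators instead.
model = smf.ols(
    "mktcap_change ~ welfare + wlb + culture + mgmt + C(industry)", data=panel
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(model.summary())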

Estimation of Genetic Parameters for Litter Size and Sex Ratio in Yorkshire and Landrace Pigs (요크셔종과 랜드레이스종의 산자수 및 성비에 대한 유전모수 추정)

  • Lee, Kyung-Soo;Kim, Jong-Bok;Lee, Jeong-Koo
    • Journal of Animal Science and Technology
    • /
    • v.52 no.5
    • /
    • pp.349-356
    • /
    • 2010
  • This study was conducted to estimate heritabilities, repeatabilities, and rank correlation coefficients among breeding values for litter size and sex ratio of Yorkshire and Landrace pigs using various single-trait animal models. The analyses were carried out on data comprising 26,390 litters of Yorkshire and 26,173 litters of Landrace collected from 1998 to 2008 at a private swine breeding farm located in the central part of Korea. Five different analytical models were used for genetic parameter estimation. Model 1 was the simplest basic model, fitted with a year-month contemporary group fixed effect, a random additive genetic effect, and a random residual effect. Model 2 was similar to model 1 but added a permanent maternal environmental effect as a random effect, and model 3 was similar to model 2 but added the linear and quadratic effects of sow age as fixed covariates. Model 4 was similar to model 2 except that parity was added as a fixed effect, and model 5 was similar to models 3 and 4 but with the covariate of sow age nested within the parity effect. The results obtained in this study are summarized as follows: The means and standard errors of total number of pigs born per litter (TNB) and number of pigs born alive per litter (NBA) were $11.35{\pm}0.02$ and $10.04{\pm}0.02$ for Yorkshire, and $10.97{\pm}0.02$ and $9.98{\pm}0.02$ for Landrace, respectively. The sex ratio (percentage of females per litter) was $45.75{\pm}0.11%$ and $45.75{\pm}0.11%$ for Yorkshire and Landrace, respectively. The heritability estimates of TNB (0.243) and NBA (0.192) from model 1 tended to be higher than those from any other model in both breeds. Differences in heritability and repeatability for TNB were not large among models 3, 4, and 5, and the same negligible differences among estimates from models 3, 4, and 5 were observed for NBA, for which heritability and repeatability ranged from 0.096 to 0.099 and from 0.188 to 0.193, respectively, in Yorkshire, and from 0.092 to 0.098 and from 0.193 to 0.196, respectively, in Landrace. The heritability estimates for sex ratio were close to zero, ranging from 0.002 to 0.003 for TNB and from 0.001 to 0.003 for NBA over the models applied. The rank correlation coefficients of breeding values from model 1 with those from the other models (models 2, 3, 4, and 5), and of breeding values from model 2 with those from the other models (models 1, 3, 4, and 5), were highly positive but lower than the coefficients among breeding values from models 3, 4, and 5, which were approximately 0.99 for TNB and NBA in both breeds.
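
For reference, a repeatability animal model of the type described in models 2-5, and the standard definitions of heritability and repeatability from its variance components, can be written in textbook form (generic notation, not the authors' exact parameterization):

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{a} + \mathbf{W}\mathbf{p} + \mathbf{e}, \qquad \mathbf{a}\sim N(\mathbf{0},\,\mathbf{A}\sigma_a^{2}),\; \mathbf{p}\sim N(\mathbf{0},\,\mathbf{I}\sigma_{pe}^{2}),\; \mathbf{e}\sim N(\mathbf{0},\,\mathbf{I}\sigma_e^{2})$$

$$h^{2} = \frac{\sigma_a^{2}}{\sigma_a^{2}+\sigma_{pe}^{2}+\sigma_e^{2}}, \qquad r = \frac{\sigma_a^{2}+\sigma_{pe}^{2}}{\sigma_a^{2}+\sigma_{pe}^{2}+\sigma_e^{2}}$$

where $\mathbf{y}$ holds the litter records, $\boldsymbol{\beta}$ the fixed effects (contemporary group, and parity or sow-age covariates depending on the model), $\mathbf{a}$ the additive genetic effects with numerator relationship matrix $\mathbf{A}$, $\mathbf{p}$ the permanent environmental effects of the sow, and $\mathbf{e}$ the residuals. Model 1 omits $\mathbf{p}$, so its additive variance absorbs part of the permanent environmental variance, which helps explain why it gave the highest heritability estimates.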

Development of Quantification Methods for the Myocardial Blood Flow Using Ensemble Independent Component Analysis for Dynamic $H_2^{15}O$ PET (동적 $H_2^{15}O$ PET에서 앙상블 독립성분분석법을 이용한 심근 혈류 정량화 방법 개발)

  • Lee, Byeong-Il;Lee, Jae-Sung;Lee, Dong-Soo;Kang, Won-Jun;Lee, Jong-Jin;Kim, Soo-Jin;Choi, Seung-Jin;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.6
    • /
    • pp.486-491
    • /
    • 2004
  • Purpose: Factor analysis and independent component analysis (ICA) have been used for handling dynamic image sequences. The theoretical advantages of a newly suggested ICA method, ensemble ICA, led us to apply this method to the analysis of dynamic myocardial $H_2^{15}O$ PET data. In this study, we quantified patients' blood flow using the ensemble ICA method. Materials and Methods: Twenty subjects underwent $H_2^{15}O$ PET scans on an ECAT EXACT 47 scanner and myocardial perfusion SPECT on a Vertex scanner. After transmission scanning, dynamic emission scans were initiated simultaneously with the injection of $555{\sim}740$ MBq $H_2^{15}O$. Hidden independent components can be extracted from the observed mixed data (PET images) by means of ICA algorithms. Ensemble learning is a variational Bayesian method that provides an analytical approximation to the parameter posterior using a tractable distribution. The variational approximation forms a lower bound on the ensemble likelihood, and the maximization of the lower bound is achieved by minimizing the Kullback-Leibler divergence between the true posterior and the variational posterior. In this study, the posterior pdf was approximated by a rectified Gaussian distribution to incorporate the non-negativity constraint, which is suitable for dynamic images in nuclear medicine. Blood flow was measured in 9 regions: the apex, four areas in the mid wall, and four areas in the basal wall. Myocardial perfusion SPECT scores and angiography results were compared with the regional blood flow. Results: Major cardiac components were separated successfully by the ensemble ICA method, and blood flow could be estimated in 15 of 20 patients. Mean myocardial blood flow was $1.2{\pm}0.40$ ml/min/g at rest and $1.85{\pm}1.12$ ml/min/g in the stress state. Blood flow values obtained by one operator on two different occasions were highly correlated (r=0.99). In the myocardium component image, the image contrast between the left ventricle and the myocardium was 1:2.7 on average. Perfusion reserve was significantly different between the regions with and without stenosis detected by coronary angiography (P<0.01). In 66 segments with stenosis confirmed by angiography, the segments with reversible perfusion decrease on perfusion SPECT showed lower perfusion reserve values in $H_2^{15}O$ PET. Conclusions: Myocardial blood flow could be estimated using an ICA method with ensemble learning. We suggest that ensemble ICA incorporating a non-negativity constraint is a feasible method for handling dynamic image sequences obtained by nuclear medicine techniques.
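
The ensemble-learning (variational Bayesian) step mentioned above rests on the standard decomposition of the log evidence; in generic notation (not the authors' exact derivation), for observed data $X$, hidden sources/parameters $\theta$, and a tractable approximating posterior $q(\theta)$:

$$\log p(X) \;=\; \underbrace{\int q(\theta)\,\log\frac{p(X,\theta)}{q(\theta)}\,d\theta}_{\mathcal{L}(q),\ \text{the lower bound}} \;+\; \mathrm{KL}\!\left(q(\theta)\,\middle\|\,p(\theta\mid X)\right), \qquad \mathrm{KL}\ge 0,$$

so maximizing the tractable lower bound $\mathcal{L}(q)$ is equivalent to minimizing the Kullback-Leibler divergence between the variational posterior and the true posterior; restricting $q$ to rectified Gaussians is what enforces the non-negativity appropriate for dynamic nuclear medicine images.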

Health Assessment of the Nakdong River Basin Aquatic Ecosystems Utilizing GIS and Spatial Statistics (GIS 및 공간통계를 활용한 낙동강 유역 수생태계의 건강성 평가)

  • JO, Myung-Hee;SIM, Jun-Seok;LEE, Jae-An;JANG, Sung-Hyun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.18 no.2
    • /
    • pp.174-189
    • /
    • 2015
  • The objective of this study was to reconstruct spatial information from the results of the investigation and evaluation of the health of living organisms, habitat, and water quality at the survey points for aquatic ecosystem health in the Nakdong River basin, to support rational decision making for the basin's aquatic ecosystem preservation and restoration policies using spatial analysis techniques, and to present efficient management methods. To analyze the aquatic ecosystem health of the Nakdong River basin, point data were constructed based on the position information of each site, using the aquatic ecosystem health investigation and evaluation results of 250 survey sections. To apply spatial analysis techniques, these data had to be reconstructed into areal data. For this purpose, spatial influence and trends were analyzed using Kriging interpolation (ArcGIS 10.1, Geostatistical Analysis), and the data were reconstructed into areal data. To analyze the spatial distribution characteristics of the Nakdong River basin's health based on these results, hotspot (Getis-Ord $G_i^*$), LISA (Local Indicator of Spatial Association), and standard deviational ellipse analyses were used. The hotspot analysis showed that the hotspot basins of the biotic indices (TDI, BMI, FAI) were the Andong Dam upstream, Wangpicheon, and Imha Dam basins, and that the health grades of their biotic indices were good. The coldspot basins were the Nakdong River Namhae, Nakdong River mouth, and Suyeong River basins. The LISA analysis showed that the exceptional areas were Gahwacheon, the Hapcheon Dam, and the Yeong River upstream basin; these areas had high bio-health indices, but their surrounding basins were low and required management for aquatic ecosystem health. The hotspot basins of the physicochemical factor (BOD) were the Nakdong River downstream basin, the Suyeong River, the Hoeya River, and the Nakdong River Namhae basin, whereas the coldspot basins were the upstream basins of the Nakdong River tributaries, including Andong Dam, Imha Dam, and the Yeong River. The hotspots of the habitat and riverside environment factor (HRI) differed from the hotspots and coldspots of the other factors in the LISA results. In general, the habitat and riverside environment of the Nakdong River mainstream and tributaries, including the Nakdong River upstream, Andong Dam, Imha Dam, and Hapcheon Dam basins, showed good health. The coldspot basins of the habitat and riverside environment also showed low biotic and physicochemical health indices, thus requiring management of the habitat and riverside environment. The time-series analysis with the standard deviational ellipse showed that the areas with good aquatic ecosystem health in terms of organisms, habitat, and riverside environment tended to move northward, while the BOD results showed different directions and concentrations by survey year. These analysis results can provide not only health management information for each survey site but also information for managing the aquatic ecosystem at the catchment level, based on spatial information, for working-level staff as well as for future water environment researchers.
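
For context, the Getis-Ord $G_i^*$ hotspot statistic named above is conventionally defined (Ord and Getis, 1995) as a z-score comparing the spatially weighted local sum of an attribute to its global mean; this is the standard textbook form, not a detail reported in the paper:

$$G_i^{*}=\frac{\sum_{j=1}^{n} w_{ij}x_j-\bar{X}\sum_{j=1}^{n} w_{ij}}{S\sqrt{\dfrac{n\sum_{j=1}^{n} w_{ij}^{2}-\left(\sum_{j=1}^{n} w_{ij}\right)^{2}}{n-1}}},\qquad \bar{X}=\frac{1}{n}\sum_{j=1}^{n}x_j,\qquad S=\sqrt{\frac{1}{n}\sum_{j=1}^{n}x_j^{2}-\bar{X}^{2}},$$

where $x_j$ is the attribute value (e.g., an interpolated health index) at location $j$ and $w_{ij}$ is the spatial weight between locations $i$ and $j$ (including $j=i$); large positive values indicate hotspots and large negative values indicate coldspots.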

Dynamic Limit and Predatory Pricing Under Uncertainty (불확실성하(不確實性下)의 동태적(動態的) 진입제한(進入制限) 및 약탈가격(掠奪價格) 책정(策定))

  • Yoo, Yoon-ha
    • KDI Journal of Economic Policy
    • /
    • v.13 no.1
    • /
    • pp.151-166
    • /
    • 1991
  • In this paper, a simple game-theoretic entry deterrence model is developed that integrates both limit pricing and predatory pricing. While there have been extensive studies dealing with predation and limit pricing separately, no study so far has analyzed these closely related practices in a unified framework. Treating each practice as if it were an independent phenomenon is, of course, an analytical necessity to abstract from complex realities. However, welfare analysis based on such a model may give misleading policy implications. By analyzing limit and predatory pricing within a single framework, this paper attempts to shed some light on the effects of interactions between these two frequently cited tactics of entry deterrence. Another distinctive feature of the paper is that limit and predatory pricing emerge, in equilibrium, as rational, profit-maximizing strategies in the model. Until recently, the only conclusion from formal analyses of predatory pricing was that predation is unlikely to take place if every economic agent is assumed to be rational. This conclusion rests upon the argument that predation is costly; that is, it inflicts more losses upon the predator than upon the rival producer, and, therefore, is unlikely to succeed in driving out the rival, who understands that the price cutting, if it ever takes place, must be temporary. Recently several attempts have been made to overcome this modelling difficulty by Kreps and Wilson, Milgrom and Roberts, Benoit, Fudenberg and Tirole, and Roberts. With the exception of Roberts, however, these studies, though successful in preserving the rationality of players, still share one serious weakness in that they resort to ad hoc, external constraints in order to generate profit-maximizing predation. The present paper uses a highly stylized model of Cournot duopoly and derives the equilibrium predatory strategy without invoking external constraints except the assumption of asymmetrically distributed information. The underlying intuition behind the model can be summarized as follows. Imagine a firm that is considering entry into a monopolist's market but is uncertain about the incumbent firm's cost structure. If the monopolist has low costs, the rival would rather not enter, because it would be difficult to compete with an efficient, low-cost firm. If the monopolist has high costs, however, the rival will definitely enter the market because it can make positive profits. In this situation, if the incumbent firm unwittingly produces its monopoly output, the entrant can infer the nature of the monopolist's cost by observing the monopolist's price. Knowing this, the high-cost monopolist increases its output level up to what would have been produced by a low-cost firm in an effort to conceal its cost condition. This constitutes limit pricing. The same logic applies when there is a rival competitor in the market. Producing the high-cost duopoly output is self-revealing and thus to be avoided. Therefore, the firm chooses to produce the low-cost duopoly output, consequently inflicting losses on the entrant or rival producer, thus acting in a predatory manner. The policy implications of the analysis are rather mixed. Contrary to the widely accepted hypothesis that predation is, at best, a negative-sum game, and thus a strategy unlikely to be played from the outset, this paper concludes that predation can be a real occurrence by showing that it can arise as an effective profit-maximizing strategy. This conclusion alone may imply that the government can play a role in increasing consumer welfare, say, by banning predation or limit pricing. However, the problem is that it is rather difficult to ascribe any welfare losses to these kinds of entry-deterring practices. This difficulty arises from the fact that if the same practices had been adopted by a low-cost firm, they could not be called entry-deterring. Moreover, the high-cost incumbent in the model is doing exactly what the low-cost firm would have done to keep the market to itself. All in all, this paper suggests that a government injunction against limit and predatory pricing should be applied with great care, evaluating each case on its own merits. Hasty generalization may work to the detriment, rather than the enhancement, of consumer welfare.
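
The intuition above can be illustrated with a minimal two-period formalization that is not taken from the paper itself: let the incumbent's marginal cost be $c\in\{c_L,c_H\}$ with $c_L<c_H$, unknown to the (potential) entrant, and let $\delta$ be the discount factor. Writing $\pi_H(q)$ for the high-cost incumbent's current profit at output $q$, $\pi_H^{m}$ for its profit at its own monopoly output, and $\pi_H^{d}$ for its duopoly profit after entry, mimicking the low-cost output $q_L$ deters entry and is worthwhile whenever

$$\pi_H(q_L) + \delta\,\pi_H^{m} \;\ge\; \pi_H^{m} + \delta\,\pi_H^{d} \quad\Longleftrightarrow\quad \delta\left(\pi_H^{m}-\pi_H^{d}\right) \;\ge\; \pi_H^{m}-\pi_H(q_L).$$

The right-hand side is the one-period cost of distorting output to conceal the cost type; the left-hand side is the discounted value of keeping the market, which is exactly the trade-off that makes limit pricing (against a potential entrant) and predatory pricing (against an active rival) rational in the signaling model described above.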

An Analytical Study on Stem Growth of Chamaecyparis obtusa (편백(扁栢)의 수간성장(樹幹成長)에 관(關)한 해석적(解析的) 연구(硏究))

  • An, Jong Man;Lee, Kwang Nam
    • Journal of Korean Society of Forest Science
    • /
    • v.77 no.4
    • /
    • pp.429-444
    • /
    • 1988
  • Considering the recent trend toward the development of multiple uses of forest trees, investigations providing comprehensive information on young stands of Hinoki cypress are necessary for rational forest management. From this point of view, 83 sample trees were selected and felled from 23-year-old stands of Hinoki cypress at Changsung-gun, Chonnam-do. Various stem growth factors of the felled trees were measured, and canonical correlation analysis, principal component analysis, and factor analysis were applied to investigate the stem growth characteristics and the relationships among stem growth factors, and to obtain latent and comprehensive information. The results are as follows: The canonical correlation coefficient between stem volume and the quality growth factors was 0.9877. The coefficients of the canonical variates showed that DBH among the diameter growth factors and height among the height growth factors had important effects on stem volume. From the analysis of the relationship between stem volume and the canonical variates, in which DBH and height were linearly combined as one set, DBH had a greater influence on volume growth than height. The 1st and 2nd principal components were adopted to meet the criterion of 85% effective variance in the principal component analysis of 12 stem growth factors; they showed a cumulative contribution rate of 88.10%. The 1st and 2nd principal components were interpreted as a "size factor" and a "shape factor", respectively. From the summed proportion of the efficient principal components for each variate, the information of all variates except crown diameter, clear length, and form height was explained by more than 87%. Two common factors were set by the eigenvalues obtained from the SMC (squared multiple correlation) of the diagonal elements of the canonical matrix. There were two latent factors, $f_1$ and $f_2$; the former was interpreted as the nature of the diameter growth system. In the inherent structure of the 12 growth factors, the communalities except for clear length and crown diameter had great explanatory power of 78.62-98.30%. The 83 sample trees could be classified into 5 stem types as follows: a medium type within a radius of ${\pm}1$ standard deviation of the factor scores, a uniform type in diameter and height growth in the 1st quadrant, a slim type in the 2nd quadrant, a dwarfish type in the 3rd quadrant, and a fall-holed type in the 4th quadrant.
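
The multivariate workflow described above (canonical correlation between sets of growth factors, plus PCA to extract size and shape components) can be sketched in modern terms as below; the variable grouping and the file and column names are assumptions for illustration, not the authors' exact variable sets.

# Minimal sketch (assumed column names, not the authors' exact variable sets):
# canonical correlation between two sets of stem growth factors, and PCA on the
# standardized factors to read off "size" and "shape" components.
import numpy as np
import pandas as pd
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

trees = pd.read_csv("hinoki_stems.csv")            # assumed: one row per felled tree
diameter_set = trees[["dbh", "d02h", "d04h"]]      # hypothetical diameter growth factors
height_set = trees[["height", "clear_length"]]     # hypothetical height growth factors

# Canonical correlation between the two sets of growth factors
cca = CCA(n_components=1)
u, v = cca.fit_transform(diameter_set, height_set)
print("canonical correlation:", np.corrcoef(u[:, 0], v[:, 0])[0, 1])

# PCA on all standardized growth factors; the first two components are often
# interpreted as overall "size" and "shape" axes, as in the abstract.
growth = StandardScaler().fit_transform(trees.select_dtypes("number"))
pca = PCA(n_components=2)
scores = pca.fit_transform(growth)
print("cumulative contribution:", pca.explained_variance_ratio_.cumsum())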

The Effect of PET/CT Images on SUV with the Correction of CT Image by Using Contrast Media (PET/CT 영상에서 조영제를 이용한 CT 영상의 보정(Correction)에 따른 표준화섭취계수(SUV)의 영향)

  • Ahn, Sha-Ron;Park, Hoon-Hee;Park, Min-Soo;Lee, Seung-Jae;Oh, Shin-Hyun;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.77-81
    • /
    • 2009
  • Purpose: The PET component of PET/CT (Positron Emission Tomography/Computed Tomography) quantitatively shows the biological and chemical information of the body, but has the limitation of not presenting a clear anatomic structure. By combining PET with CT, it is possible not only to offer higher resolution but also to effectively shorten scanning time and reduce noise by using the CT data for attenuation correction. Because contrast media used at CT scanning make it easier to determine the exact extent of a lesion and to distinguish normal organs, their use is increasing. However, contrast media affect the semi-quantitative measures of PET/CT images. In this study, therefore, we sought to establish the reliability of the SUV (Standardized Uptake Value) with CT data correction so that it can support more accurate diagnosis. Materials and Methods: A total of 30 subjects (age range 27 to 72, average age 49.6) were included, and a DSTe scanner (General Electric Healthcare, Milwaukee, MI, USA) was used. $^{18}F$-FDG 370~555 MBq was injected into the subjects depending on their weight and, after about 60 minutes in a resting position, a whole-body scan was acquired. The CT scan was set to 140 kV and 210 mA, and the injected amount of contrast media was 2 cc per 1 kg of body weight. From the raw data, we obtained images showing the effect of the contrast media through attenuation correction with both corrected and uncorrected CT data. We then drew ROIs (Regions of Interest) in each area to measure the SUV and analyzed the differences. Results: According to the analysis, the SUV decreased with contrast media correction in the liver and heart, which have more blood flow than the other organs, whereas there was no difference in the lungs. Conclusions: Whereas CT images acquired with contrast media in PET/CT increase the contrast of the targeted region and can thereby improve diagnostic efficiency, they produce an increase in the SUV, a semi-quantitative analytical measure. In this study, we measured the variation in SUV after correcting for the influence of the contrast media and compared the differences. By revising the SUV, which increases in images attenuation-corrected with contrast-enhanced CT, we can expect anatomical images of high resolution. Furthermore, this more reliable semi-quantitative method is expected to enhance the diagnostic value.
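
For reference, the body-weight-normalized SUV discussed above is conventionally computed as (standard definition, not specific to this study):

$$\mathrm{SUV} \;=\; \frac{C_{\text{tissue}}(t)}{A_{\text{injected}}/W},$$

with the decay-corrected tissue activity concentration $C_{\text{tissue}}$, injected activity $A_{\text{injected}}$, and body weight $W$ expressed in consistent units (e.g., kBq/mL, kBq, g). Because PET attenuation correction is derived from the CT image, residual contrast media raise the apparent CT density, inflate the attenuation-correction factors, and thereby inflate $C_{\text{tissue}}$ and the SUV unless the CT data are corrected, which is the effect the study quantifies.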

Quantitative Analysis of Carbohydrate, Protein, and Oil Contents of Korean Foods Using Near-Infrared Reflectance Spectroscopy (근적외 분광분석법을 이용한 국내 유통 식품 함유 탄수화물, 단백질 및 지방의 정량 분석)

  • Song, Lee-Seul;Kim, Young-Hak;Kim, Gi-Ppeum;Ahn, Kyung-Geun;Hwang, Young-Sun;Kang, In-Kyu;Yoon, Sung-Won;Lee, Junsoo;Shin, Ki-Yong;Lee, Woo-Young;Cho, Young Sook;Choung, Myoung-Gun
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.43 no.3
    • /
    • pp.425-430
    • /
    • 2014
  • Foods contain various nutrients such as carbohydrates, protein, oil, vitamins, and minerals. Among them, carbohydrates, protein, and oil are the main constituents of foods. Usually, these constituents are analyzed by the Kjeldahl method, the Soxhlet method, and so on. However, these analytical methods are complex, costly, and time-consuming. Thus, this study aimed to rapidly and effectively analyze carbohydrate, protein, and oil contents with near-infrared reflectance spectroscopy (NIRS). A total of 517 food samples were measured within the wavelength range of 400 to 2,500 nm. In total, 412 food calibration samples and 162 validation samples were used for NIRS equation development and validation, respectively. For the NIRS equation of carbohydrates, the most accurate equation was obtained under the 1, 4, 5, 1 (1st derivative, 4 nm gap, 5-point smoothing, and 1-point second smoothing) math treatment using the weighted MSC (multiplicative scatter correction) scatter correction method with MPLS (modified partial least squares) regression. For protein and oil, the best equations were obtained under the 2, 5, 5, 3 and 1, 1, 1, 1 conditions, respectively, using the standard MSC and standard normal variate only scatter correction methods with MPLS regression. These NIRS equations showed very high coefficients of determination in calibration ($R^2$: carbohydrates, 0.971; protein, 0.974; oil, 0.937) and low standard errors of calibration (carbohydrates, 4.066; protein, 1.080; oil, 1.890). The optimal equation conditions were applied to a validation set of 162 samples. The validation results showed very high coefficients of determination in prediction ($r^2$: carbohydrates, 0.987; protein, 0.970; oil, 0.947) and low standard errors of prediction (carbohydrates, 2.515; protein, 1.144; oil, 1.370). Therefore, these NIRS equations are applicable to the determination of carbohydrate, protein, and oil contents in various foods.
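
The chemometric pipeline described above (scatter correction, a derivative math treatment, and partial least squares calibration) can be sketched as follows; this uses ordinary PLS from scikit-learn and a Savitzky-Golay derivative as stand-ins for the MPLS regression and gap-segment derivative of the WinISI-style software, and the file and column names are assumptions.

# Minimal sketch of an NIRS calibration (not the authors' exact MPLS / math
# treatment): multiplicative scatter correction, a first-derivative pretreatment,
# and PLS regression against a reference constituent value.
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

data = pd.read_csv("nirs_foods.csv")            # assumed: wavelength columns + "protein"
X = data.drop(columns="protein").to_numpy()     # spectra, e.g. 400-2,500 nm
y = data["protein"].to_numpy()                  # reference value (e.g. Kjeldahl)

def msc(spectra):
    """Multiplicative scatter correction against the mean spectrum."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)
        corrected[i] = (s - intercept) / slope
    return corrected

X = msc(X)
X = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)  # 1st derivative

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
print("r2:", r2_score(y_val, y_pred),
      "SEP:", np.sqrt(mean_squared_error(y_val, y_pred)))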

Effects of Dietary Fats and Oils On the Growth and Serum Cholesterol Content of Rats and Chicks (섭취(攝取) 지방(脂肪)의 종류(種類)가 흰쥐와 병아리의 성장(成長) 및 혈청(血淸) Cholesterol 함량(含量)에 미치는 영향(影響))

  • Park, Kiw-Rye;Han, In-Kyu
    • Journal of Nutrition and Health
    • /
    • v.9 no.2
    • /
    • pp.59-67
    • /
    • 1976
  • A series of experiments was carried out to study the effects of commonly used dietary fats or oils on the growth, feed efficiency, nutrient utilizability, nitrogen retention, and serum cholesterol of rats and chicks fed various fats or oils at a level of 10% during 12 weeks of experimentation. The fats and oils used in this experiment were also analyzed for the composition of some fatty acids. The main observations are as follows: 1. All groups that received fats or oils gained more body weight than the unsupplemented control group, except the chicks fed fish oil and rapeseed oil, although no statistical significance was found between treatments. Body weight gain achieved by the rats fed soybean oil, rapeseed oil, animal fat, or corn oil was much greater than in the other groups, and that achieved by the chicks fed corn oil and animal fat was greater than in the other vegetable oil groups, although no statistical significance was found among treatments. 2. Feed intake data indicated that the corn oil groups of both rats and chicks consumed considerably more feed than the other groups, whereas the feed intake of the fish oil groups was the lowest among the experimental animals, indicating that fish oil might contain an unfavorable compound that depresses palatability. In feed efficiency, the soybean oil group of rats and the corn oil group of chicks were significantly better than the other experimental groups. In general, the addition of fat or oil to the diet improved feed efficiency. 3. Nutrient utilizability and nitrogen retention data showed that fat in the experimental diets containing 10% fats or oils was absorbed better than the crude fat in the control diet. There was no significant difference in nitrogen retention among treatments. 4. The liver fat content of the rapeseed oil group was much higher than that of the control group and the other groups. It was also noticed that feeding more polyunsaturated fatty acids resulted in a higher content of liver fat. 5. The present data indicated that the serum cholesterol content of the rapeseed oil and sesame oil groups of rats was higher than that of the control group. The serum cholesterol content of the animal fat group of chicks was higher than that of the other groups. It was interesting to note that the serum cholesterol content of chicks was higher than that of rats regardless of the kind of oil received. 6. Analytical data revealed that the fatty acid composition of the vegetable oils consisted mainly of oleic acid and linoleic acid, whereas the animal fat and fish oil were composed of saturated fatty acids such as myristic and palmitic acid. It should be mentioned that the perilla oil contained a very large amount of linolenic acid (58.4%) compared with the other vegetable oils. Little arachidonic acid was detected in the vegetable oils, and none in the animal fat and fish oil.

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides satisfying computational speed, a very high precision of definitions, and gives users the opportunity to choose membership functions of any shape. However, a significant memory waste can be registered as well. It is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. In figure 1 a term set is represented whose characteristics are common for fuzzy controllers and to which we will refer in the following. The above term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and it will be represented by the memory rows. The length of a word of memory is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values for each element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 24. The memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the word dimension would be 8*5 bits, and the dimension of the memory would therefore have been 128*40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets. Focusing on the elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and it is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value for each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain a good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
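
As an illustration of the memory organization described above (a sketch written for this listing, not the authors' hardware description), each antecedent-memory row can be packed as nfm = 3 pairs of a 3-bit membership-function index and a 5-bit membership value, giving the 24-bit word length computed in the abstract:

# Sketch of the compressed antecedent-memory row: for each element of the
# universe of discourse, store up to nfm = 3 (function index, membership value)
# pairs instead of all 8 membership values. Word length = nfm * (5 + 3) = 24 bits.
NFM = 3          # max non-null memberships per universe element
VAL_BITS = 5     # 32 discretization levels for membership values
IDX_BITS = 3     # 8 membership functions

def pack_row(pairs):
    """pairs: up to NFM (function_index, value) tuples with non-null values."""
    assert len(pairs) <= NFM
    word = 0
    for slot, (idx, val) in enumerate(pairs):
        field = (val << IDX_BITS) | idx                  # 8-bit (value, index) field
        word |= field << (slot * (VAL_BITS + IDX_BITS))
    return word                                          # fits in 24 bits

def lookup(word, function_index):
    """Return the stored membership value for function_index, or 0 (the comparator's role)."""
    for slot in range(NFM):
        field = (word >> (slot * (VAL_BITS + IDX_BITS))) & ((1 << (VAL_BITS + IDX_BITS)) - 1)
        idx, val = field & ((1 << IDX_BITS) - 1), field >> IDX_BITS
        if val and idx == function_index:
            return val
    return 0

# Example: a universe element with non-null membership on fuzzy sets 2, 3, and 4.
row = pack_row([(2, 7), (3, 31), (4, 12)])
print(lookup(row, 3), lookup(row, 6))   # -> 31 0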
