• Title/Summary/Keyword: Dimensional Parameter


A Computed Tomography-Based Anatomic Comparison of Three Different Types of C7 Posterior Fixation Techniques : Pedicle, Intralaminar, and Lateral Mass Screws

  • Jang, Woo-Young;Kim, Il-Sup;Lee, Ho-Jin;Sung, Jae-Hoon;Lee, Sang-Won;Hong, Jae-Taek
    • Journal of Korean Neurosurgical Society
    • /
    • v.50 no.3
    • /
    • pp.166-172
    • /
    • 2011
  • Objective : The intralaminar screw (ILS) fixation technique offers an alternative to pedicle screw (PS) and lateral mass screw (LMS) fixation in the C7 spine. Although cadaveric studies have described the anatomy of the pedicles, laminae, and lateral masses at C7, 3-dimensional computed tomography (CT) imaging is the modality of choice for pre-surgical planning. In this study, the goal was to determine the anatomical parameters and optimal screw trajectory for ILS placement at C7, and to compare this information to PS and LMS placement in the C7 spine as determined by CT evaluation. Methods : A total of 120 patients (60 men and 60 women) with an average age of $51.7{\pm}13.6$ years were selected by retrospective review of a trauma registry database over a 2-year period. Patients were included in the study if they were older than 15 years of age, had standardized axial bone-window CT imaging at C7, and had no evidence of spinal trauma. For each lamina and pedicle, the width (outer cortical and inner cancellous), maximal screw length, and optimal screw trajectory were measured, and the maximal screw length of the lateral mass was measured using m-view 5.4 software. Statistical analysis was performed using Student's t-test. Results : At C7, the maximal PS length was significantly greater than the ILS and LMS lengths (PS, $33.9{\pm}3.1$ mm; ILS, $30.8{\pm}3.1$ mm; LMS, $10.6{\pm}1.3$ mm; p<0.01). When the outer cortical and inner cancellous widths were compared between the pedicle and lamina, the mean pedicle outer cortical width at C7 was wider than that of the lamina by an average of 0.6 mm (pedicle, $6.8{\pm}1.2$ mm; lamina, $6.2{\pm}1.2$ mm; p<0.01). At C7, 95.8% of the laminae measured accepted a 4.0-mm screw with 1.0 mm of clearance, compared with 99.2% of the pedicles. Of the laminae measured, 99.2% accepted a 3.5-mm screw with 1.0 mm of clearance, compared with 100% of the pedicles.
When the outer cortical and inner cancellous heights were compared between pedicle and lamina, the mean lamina outer cortical height at C7 was greater than that of the pedicle by an average of 9.9 mm (lamina, $18.6{\pm}2.0$ mm; pedicle, $8.7{\pm}1.3$ mm; p<0.01). The ideal screw trajectory at C7 was also measured ($47.8{\pm}4.8^{\circ}$ for ILS and $35.1{\pm}8.1^{\circ}$ for PS). Conclusion : Although pedicle screw fixation is the most ideal instrumentation method for C7 fixation with respect to length and cortical diameter, the anatomy of the C7 lamina can also accommodate screw placement. Therefore, the C7 intralaminar screw could be an alternative fixation technique with few anatomic limitations in cases where C7 pedicle screw fixation is not favorable. However, anatomical variations in length and width must be considered when placing an intralaminar or pedicle screw at C7.
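The group comparisons above (e.g. PS vs. ILS maximal length) can be reproduced from the reported summary statistics alone. A minimal sketch, assuming n = 120 per group (the study's patient count; the exact per-group n is an assumption here), using the two-sample t statistic computed from means and standard deviations:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic from summary statistics (Welch's form)."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

# Summary values reported in the abstract: PS 33.9 +/- 3.1 mm vs.
# ILS 30.8 +/- 3.1 mm; n = 120 per group is an assumption.
t_ps_vs_ils = welch_t(33.9, 3.1, 120, 30.8, 3.1, 120)
print(round(t_ps_vs_ils, 2))  # ≈ 7.75, consistent with p < 0.01
```

A t statistic this large is far beyond the ~1.97 critical value at p = 0.05, matching the abstract's p < 0.01.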

Dual Codec Based Joint Bit Rate Control Scheme for Terrestrial Stereoscopic 3DTV Broadcast (지상파 스테레오스코픽 3DTV 방송을 위한 이종 부호화기 기반 합동 비트율 제어 연구)

  • Chang, Yong-Jun;Kim, Mun-Churl
    • Journal of Broadcast Engineering
    • /
    • v.16 no.2
    • /
    • pp.216-225
    • /
    • 2011
  • Following the proliferation of three-dimensional video contents and displays, many terrestrial broadcasting companies have been preparing for stereoscopic 3DTV service. In terrestrial stereoscopic broadcast, it is a difficult task to code and transmit two video sequences while sustaining quality as high as that of 2DTV broadcast, due to the limited bandwidth defined by existing digital TV standards such as ATSC. Thus, terrestrial 3DTV broadcasting with a heterogeneous video codec system, where the left and right images are coded with MPEG-2 and H.264/AVC, respectively, is considered in order to achieve both a high-quality broadcasting service and compatibility for existing 2DTV viewers. Without significant change to current terrestrial broadcasting systems, we propose a joint rate control scheme for stereoscopic 3DTV service based on this heterogeneous dual-codec system. The proposed joint rate control scheme applies to the MPEG-2 encoder the quadratic rate-quantization model adopted in H.264/AVC. The controller is then designed so that the sum of the left and right bitstreams meets the bandwidth requirement of the broadcasting standards while the sum of image distortions is minimized by adjusting the quantization parameters obtained from the proposed optimization scheme. In addition, the optimization includes a condition that maintains the quality difference between the left and right images around a desired level, in order to mitigate negative effects on the human visual system. Experimental results demonstrate that the proposed bit rate control scheme outperforms a rate control method in which each video coding standard uses its own bit rate control algorithm independently, in terms of a 2.02% increase in PSNR, a 77.6% decrease in the average absolute quality difference, and a 74.38% reduction in the variance of the quality difference.
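The quadratic rate-quantization model mentioned above has the standard form R(Q) = a/Q + b/Q², which can be inverted in closed form to pick a quantization parameter for a target bit budget. A minimal sketch (the coefficients a, b and the target are illustrative, not values from the paper):

```python
import math

def rate(q, a, b):
    """Quadratic R-Q model: bits as a function of quantizer Q."""
    return a / q + b / q ** 2

def q_for_target(r_target, a, b):
    """Invert R(Q) = a/Q + b/Q^2 for Q > 0: solve b*x^2 + a*x - R = 0
    in x = 1/Q via the quadratic formula, keeping the positive root."""
    x = (-a + math.sqrt(a * a + 4.0 * b * r_target)) / (2.0 * b)
    return 1.0 / x

a, b = 1000.0, 500.0   # hypothetical model coefficients (fit per frame in practice)
r_target = 200.0       # hypothetical bit budget for the frame
q = q_for_target(r_target, a, b)
print(round(q, 3), round(rate(q, a, b), 3))  # chosen Q hits the target rate
```

A joint controller in the spirit of the abstract would split one total budget between the MPEG-2 and H.264/AVC streams and solve this inversion for each, subject to the quality-difference constraint.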

THEORETICAL STUDY ON OBSERVED COLOR-MAGNITUDE DIAGRAMS

  • Lee, See-Woo
    • Journal of The Korean Astronomical Society
    • /
    • v.12 no.1
    • /
    • pp.41-70
    • /
    • 1979
  • From Böhm-Vitense's atmospheric model calculations, the relations [$T_e$, (B-V)] and [B.C., (B-V)] with respect to heavy element abundance were obtained. Using these relations and the evolutionary model calculations of Rood, and of Sweigart and Gross, analytic expressions for some physical parameters relating to the C-M diagrams of globular clusters were derived, and they were applied to 21 globular clusters with observed transition periods of RR Lyrae variables. More than 20 different parameters were examined for each globular cluster. The derived ranges of some basic parameters are as follows: $Y=0.21{\sim}0.33$, $Z=1.5{\times}10^{-4}{\sim}4.5{\times}10^{-3}$, age $t=9.5{\sim}19{\times}10^9$ years, mass for red giants $m_{RG}=0.74m_{\odot}{\sim}0.91m_{\odot}$, mass for RR Lyrae stars $m_{RR}=0.59m_{\odot}{\sim}0.75m_{\odot}$, the visual magnitude difference between the turnoff point and the horizontal branch (HB) ${\Delta}V_{to}=3.1{\sim}3.4$ ($<{\Delta}V_{to}>=3.32$), the color of the blue edge of the RR Lyrae gap $(B-V)_{BE}=0.17{\sim}0.21$ ($<(B-V)_{BE}>=0.18$), $[\frac{m}{L}]_{RR}=-1.7{\sim}-1.9$, and the mass difference of $m_{RR}$ relative to $m_{RG}$, $(m_{RG}-m_{RR})/m_{RG}=0.0{\sim}0.39$. It was found that the ranges of the derived parameters agree reasonably well with the observed ones and with those estimated by others. Some important results obtained herein can be summarized as follows: (i) There are considerable variations in the initial helium abundance and in the age of globular clusters. (ii) The radial gradient of heavy element abundance does exist for globular clusters, as shown by Janes for field stars and open clusters. (iii) The helium abundance seems to have increased with age through massive star evolution after a considerable amount (Y>0.2) of helium had been attained by Big-Bang nucleosynthesis, but no radial gradient of helium abundance is seen.
(iv) A considerable amount of heavy elements ($Z{\sim}10^{-3}$) might have been formed in the inner halo ($r_{GC}$<10 kpc) during the earliest galactic collapse, and the heavy element abundance has then been slowly enriched towards the galactic center and disk, establishing the radial gradient of heavy element abundance. (v) The final galactic disk formation might have taken longer than the halo formation by about half of the galactic age, supporting the slow, inhomogeneous collapse model of Larson. (vi) Of the three principal parameters controlling the morphology of C-M diagrams, it was found that the first parameter is heavy element abundance, the second age, and the third helium abundance. (vii) The globular clusters can be divided into three different groups, AI, BI and CII, according to Z, Y and age as well as Dickens' HB types. BI group clusters of HB types 4 and 5, like M 3 and NGC 7006, are the oldest and have the lowest helium abundance of the three groups, and they appear in the inner halo. On the other hand, the youngest AI clusters have the highest Z and Y, and appear in the innermost halo region and in the disk. (viii) From the clean separation of the clusters into three groups, a three-dimensional classification with the three parameters Z, Y and age is presented. (ix) The anomalous C-M diagrams can be explained in terms of the three principal parameters. That is, the anomaly of NGC 362 and NGC 7006 is accounted for by a smaller age, of the order of $1{\sim}2{\times}10^9$ years, rather than by a helium abundance difference, compared with M 3. (x) The difference between the two Oosterhoff types I and II can be explained in terms of the mean mass difference of RR Lyrae variables rather than in terms of the helium abundance difference suggested by Stobie. The mean mass of the variables in Oosterhoff type I clusters is smaller by $0.074m_{\odot}$, which is exactly consistent with Rood's estimate.
Since it was found that the mean mass of RR Lyrae stars increases with decreasing Z, the two Oosterhoff types can be explained substantially by the metal abundance difference; type II has Z<$3.4{\times}10^{-4}$, and type I has higher Z than type II.

Negative Support Reactions of the Single Span Twin-Steel Box Girder Curved Bridges with Skew Angles (단경간 2련 강박스 거더 곡선교의 사각에 따른 부반력 특성)

  • Park, Chang Min;Lee, Hyung Joon
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.16 no.4
    • /
    • pp.34-43
    • /
    • 2012
  • The behavior of curved bridges, which are commonly constructed on ramps or at interchanges, is much more complicated than that of orthogonal bridges, varying with the radius of curvature, skew angle, and spacing of shoes. Occasionally, girder camber and negative reactions can occur due to bending and torsional moments. In this study, the effects on the negative reaction in curved bridges were investigated on the basis of design variables such as radius of curvature, skew angle, and spacing of shoes. For this study, a single-span twin-steel box girder curved bridge applicable to ramp bridges, with a span length (L) of 50.0 m and a width of 9.0 m, was chosen, and the structural analysis to calculate the reactions was conducted using a 3-dimensional equivalent grillage system. The value of the negative reaction in curved bridges depends on the plan layout of the bridge, the formation of the structural system, and the boundary conditions of the bearings; therefore, the radius of curvature, skew angle, and spacing of shoes were chosen as parameters among the design variables, and the load combinations according to the design standard were considered. According to the results of the numerical analysis, the negative reaction in the curved bridge increased with a decrease in the radius of curvature, skew angle, and spacing of shoes, respectively.
Also, in the case of a skew angle of $60^{\circ}$ the negative reaction always occurred regardless of ${\theta}/B$; in the case of a skew angle of $75^{\circ}$ the negative reaction did not occur for ${\theta}/B$ below 0.27 with a radius of curvature of 180 m and for ${\theta}/B$ below 0.32 with a radius of curvature of 250 m; and in the case of a skew angle of $90^{\circ}$ the negative reaction did not occur for radii of curvature over 180 m and for ${\theta}/B$ below 0.38 with a radius of curvature of 130 m. The results of this study indicated that the occurrence of negative reactions is related to design variables such as radius of curvature, skew angle, and spacing of shoes, and problems with stability, including negative reactions, can be expected to be resolved by taking proper combinations of the design variables into consideration in the design of curved bridges.
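The mechanism behind a negative (uplift) reaction can be illustrated with a deliberately simplified two-bearing model, not the paper's grillage analysis: the vertical load splits between a bearing pair, while the torsion produced by the curved alignment adds to one bearing and subtracts from the other. All numbers below are illustrative.

```python
def inner_bearing_reaction(P, torsion, spacing):
    """Simplified two-bearing statics: vertical load P splits evenly;
    torsion T adds +T/s to the outer bearing and -T/s to the inner one.
    The inner reaction goes negative (uplift) once T > P*s/2."""
    return P / 2.0 - torsion / spacing

P = 10.0  # total vertical load (illustrative units, e.g. MN)
print(inner_bearing_reaction(P, torsion=8.0, spacing=2.0))   # positive: no uplift
print(inner_bearing_reaction(P, torsion=12.0, spacing=2.0))  # negative: uplift
```

In this toy model, shrinking the radius of curvature raises the torsion for a given load and shrinking the bearing spacing raises T/s, both pushing the inner reaction negative, which is qualitatively the trend the abstract reports.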

Quantitative Analysis of Digital Radiography Pixel Values to absorbed Energy of Detector based on the X-Ray Energy Spectrum Model (X선 스펙트럼 모델을 이용한 DR 화소값과 디텍터 흡수에너지의 관계에 대한 정량적 분석)

  • Kim Do-Il;Kim Sung-Hyun;Ho Dong-Su;Choe Bo-young;Suh Tae-Suk;Lee Jae-Mun;Lee Hyoung-Koo
    • Progress in Medical Physics
    • /
    • v.15 no.4
    • /
    • pp.202-209
    • /
    • 2004
  • Flat panel based digital radiography (DR) systems have recently become useful and important in the field of diagnostic radiology. For DRs with amorphous silicon photosensors, CsI(Tl) is normally used as the scintillator, which produces visible light corresponding to the absorbed radiation energy. The visible light photons are converted into electric signals in the amorphous silicon photodiodes, which constitute a two-dimensional array. In order to produce good quality images, the detailed behavior of DR detectors under radiation must be studied. The relationship between air exposure and DR output has been investigated in many studies, but only under the condition of a fixed tube voltage. In this study, we investigated the relationship between the DR output and X-rays in terms of the energy absorbed in the detector, rather than the air exposure, using SPEC-l8, an X-ray energy spectrum model. Measured exposure was compared with calculated exposure to obtain the inherent filtration, which is an important input variable of the model. The energy absorbed in the detector was calculated using an algorithm for the absorbed energy in a material, and pixel values of real images were obtained under various conditions. The characteristic curve was obtained from the relationship between the two parameters, and the results were verified using phantoms made of water and aluminum. The pixel values of the phantom images were estimated and compared with the characteristic curve under various conditions. It was found that the relationship between the DR output and the energy absorbed in the detector was almost linear. In an experiment using the phantoms, the estimated pixel values agreed with the characteristic curve, although the effect of scattered photons introduced some errors. The effect of scattered X-rays must be studied further because it was not included in the calculation algorithm.
The results of this study can provide useful information for the pre-processing of digital radiography.
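The "almost linear" characteristic curve described above amounts to fitting pixel value against absorbed energy with ordinary least squares. A minimal sketch on synthetic data (the slope, intercept, and noise level are invented for illustration, not the paper's calibration):

```python
import random

def linear_fit(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic "absorbed energy -> pixel value" data: a linear detector
# response (true slope 12, intercept 50, both arbitrary) plus noise.
random.seed(0)
energy = [float(e) for e in range(10, 110, 10)]
pixel = [50.0 + 12.0 * e + random.gauss(0.0, 5.0) for e in energy]
slope, intercept = linear_fit(energy, pixel)
print(slope, intercept)  # recovered characteristic-curve parameters
```

Once fitted, the curve can be inverted to estimate absorbed energy from a measured pixel value, which is the pre-processing use the abstract alludes to.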

Evaluation of Setup Uncertainty on the CTV Dose and Setup Margin Using Monte Carlo Simulation (몬테칼로 전산모사를 이용한 셋업오차가 임상표적체적에 전달되는 선량과 셋업마진에 대하여 미치는 영향 평가)

  • Cho, Il-Sung;Kwark, Jung-Won;Cho, Byung-Chul;Kim, Jong-Hoon;Ahn, Seung-Do;Park, Sung-Ho
    • Progress in Medical Physics
    • /
    • v.23 no.2
    • /
    • pp.81-90
    • /
    • 2012
  • The effect of setup uncertainties on the CTV dose and the correlation between setup uncertainties and the setup margin were evaluated by Monte Carlo based numerical simulation. Patient-specific information from an IMRT treatment plan for rectal cancer designed on the VARIAN Eclipse planning system, including the planned dose distribution and tumor volume information, was utilized for the Monte Carlo simulation program. The simulation program was developed for the purposes of this study on a Linux environment using open source packages, GNU C++ and the ROOT data analysis framework. All misalignments of patient setup were assumed to follow the central limit theorem; thus systematic and random errors were generated according to Gaussian statistics with a given standard deviation as the simulation input parameter. After the setup error simulations, the change of dose in the CTV volume was analyzed from the simulation results. In order to verify the conventional margin recipe, the correlation between setup error and setup margin was compared with the margin formula developed for three-dimensional conformal radiation therapy. The simulation was performed a total of 2,000 times for each simulation input of systematic and random errors independently. The standard deviation used to generate patient setup errors was varied from 1 mm to 10 mm in 1 mm steps. In the case of systematic errors, the minimum dose in the CTV $D_{min}^{syst{\cdot}}$ decreased from 100.4% to 72.50% and the mean dose $\bar{D}_{syst{\cdot}}$ decreased from 100.45% to 97.88%, while the standard deviation of the dose distribution in the CTV volume increased from 0.02% to 3.33%. Random errors likewise reduced the mean and minimum dose to the CTV volume: the minimum dose in the CTV volume $D_{min}^{rand{\cdot}}$ was reduced from 100.45% to 94.80% and the mean dose to the CTV $\bar{D}_{rand{\cdot}}$ decreased from 100.46% to 97.87%.
As with systematic errors, the standard deviation of the CTV dose ${\Delta}D_{rand}$ increased from 0.01% to 0.63%. After calculating the margin size for each systematic and random error, the "population ratio" was introduced and applied to verify the margin recipe. It was found that the conventional margin formula satisfies the margin objective for IMRT treatment of rectal cancer. The developed Monte Carlo based simulation program might be useful for studying patient setup error and dose coverage of the CTV volume under variations of margin size and setup error.
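The core of the simulation loop described above is simple to sketch: sample Gaussian setup shifts, shift a planned dose distribution, and record the resulting CTV dose statistics. The 1-D dose profile, CTV extent, and penumbra below are all illustrative stand-ins for the patient-specific 3-D data the study used:

```python
import random

def dose(x):
    """Toy planned dose profile: 100% for |x| <= 30 mm,
    linear falloff to 0% over a 10 mm penumbra."""
    ax = abs(x)
    if ax <= 30.0:
        return 100.0
    if ax >= 40.0:
        return 0.0
    return 100.0 * (40.0 - ax) / 10.0

def ctv_min_dose(shift, ctv=(-25.0, 25.0), step=1.0):
    """Minimum dose inside the CTV when the patient is shifted by `shift` mm."""
    x, dmin = ctv[0], float("inf")
    while x <= ctv[1]:
        dmin = min(dmin, dose(x + shift))
        x += step
    return dmin

random.seed(1)
sigma = 8.0  # mm, one of the simulated standard deviations (1-10 mm range)
mins = [ctv_min_dose(random.gauss(0.0, sigma)) for _ in range(2000)]
print(sum(mins) / len(mins))  # mean CTV minimum dose drops below 100%
```

As in the abstract, larger sigma degrades the minimum CTV dose while leaving small shifts (inside the 5 mm of slack between CTV edge and field edge here) harmless, which is exactly what a setup margin is sized to guarantee.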

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.119-142
    • /
    • 2015
  • With the explosive growth in the volume of information, Internet users are experiencing considerable difficulties in obtaining necessary information online. Against this backdrop, ever-greater importance is being placed on a recommender system that provides information catered to user preferences and tastes in an attempt to address issues associated with information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from the domain constraint. The CF technique is broadly classified into memory-based CF, model-based CF and hybrid CF. Model-based CF addresses the drawbacks of CF by considering the Bayesian model, clustering model or dependency network model. This filtering technique not only improves the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. Such tradeoff is attributed to reduced coverage, which is a type of sparsity issues. In addition, expensive model-building may lead to performance instability since changes in the domain environment cannot be immediately incorporated into the model due to high costs involved. Cumulative changes in the domain environment that have failed to be reflected eventually undermine system performance. This study incorporates the Markov model of transition probabilities and the concept of fuzzy clustering with CBCF to propose predictive clustering-based CF (PCCF) that solves the issues of reduced coverage and of unstable performance. The method improves performance instability by tracking the changes in user preferences and bridging the gap between the static model and dynamic users. 
Furthermore, the issue of reduced coverage is also mitigated by expanding the coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of changes (propensities) in user preferences in propensity clustering. Lastly, the preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method has been validated by testing the robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems each enabled by IBCF, CBCF, ICFEC and PCCF under an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC and PCCF for a comparative analysis of subsequent changes in system performance. The test results revealed that the suggested method produced insignificant improvement in performance in comparison with the existing techniques, and it failed to achieve significant improvement in the standard deviation that indicates the degree of data fluctuation. Nevertheless, it resulted in marked improvement over the existing techniques in terms of range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test. In the following test, there was a 36.05% improvement in the level of performance fluctuation driven by the changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability compared to the existing techniques.
Further research on this study will be directed toward enhancing the recommendation performance that failed to demonstrate significant improvement over the existing techniques. The future research will consider the introduction of a high-dimensional parameter-free clustering algorithm or deep learning-based model in order to improve performance in recommendations.
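One building block of the proposed PCCF, estimating Markov transition probabilities between preference clusters from users' rating histories, can be sketched directly. The cluster labels and sequences below are made up for illustration; the paper's actual clusters come from its fuzzy preference clustering step:

```python
from collections import defaultdict

def transition_matrix(sequences):
    """Estimate first-order Markov transition probabilities from
    observed sequences of cluster labels."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: c / total for b, c in row.items()}
    return probs

# Hypothetical per-user sequences of preference-cluster memberships.
histories = [
    ["action", "action", "drama"],
    ["action", "drama", "drama"],
    ["drama", "action"],
]
P = transition_matrix(histories)
print(P["action"])  # e.g. probability of drifting from "action" to "drama"
```

Predicting with these probabilities lets the recommender track preference drift between model rebuilds, which is the mechanism the abstract credits for bridging "the static model and dynamic users."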

The Impact of Market Environments on Optimal Channel Strategy Involving an Internet Channel: A Game Theoretic Approach (시장 환경이 인터넷 경로를 포함한 다중 경로 관리에 미치는 영향에 관한 연구: 게임 이론적 접근방법)

  • Yoo, Weon-Sang
    • Journal of Distribution Research
    • /
    • v.16 no.2
    • /
    • pp.119-138
    • /
    • 2011
  • Internet commerce has been growing at a rapid pace for the last decade. Many firms try to reach wider consumer markets by adding the Internet channel to their existing traditional channels. Despite the various benefits of the Internet channel, a significant number of firms have failed in managing the new type of channel. Previous studies could not clearly explain these conflicting results associated with the Internet channel. One of the major reasons is that most previous studies conducted analyses under a specific market condition and attributed the results to the impact of Internet channel introduction; their results are therefore strongly influenced by the specific market settings. However, firms face various market conditions in the real world. The purpose of this study is to investigate the impact of various market environments on a firm's optimal channel strategy by employing a flexible game theory model. We capture various market conditions with consumer density and the disutility of using the Internet.

    shows the channel structures analyzed in this study. Before the Internet channel is introduced, a monopoly manufacturer sells its products through an independent physical store. From this structure, the manufacturer could introduce its own Internet channel (MI). The independent physical store could also introduce its own Internet channel and coordinate it with the existing physical store (RI). An independent Internet retailer such as Amazon could enter this market (II). In this case, two types of independent retailers compete with each other. In this model, consumers are uniformly distributed on the two dimensional space. Consumer heterogeneity is captured by a consumer's geographical location (ci) and his disutility of using the Internet channel (${\delta}_{N_i}$).
    shows various market conditions captured by the two consumer heterogeneities.
    (a) illustrates a market with symmetric consumer distributions. The model captures explicitly the asymmetric distributions of consumer disutility in a market as well. In a market like that is represented in
    (c), the average consumer disutility of using an Internet store is relatively smaller than that of using a physical store. For example, this case represents the market in which 1) the product is suitable for Internet transactions (e.g., books) or 2) the level of E-Commerce readiness is high such as in Denmark or Finland. On the other hand, the average consumer disutility when using an Internet store is relatively greater than that of using a physical store in a market like (b). Countries like Ukraine and Bulgaria, or the market for "experience goods" such as shoes, could be examples of this market condition. summarizes the various scenarios of consumer distributions analyzed in this study. The range for disutility of using the Internet (${\delta}_{N_i}$) is held constant, while the range of consumer distribution (${\chi}_i$) varies from -25 to 25, from -50 to 50, from -100 to 100, from -150 to 150, and from -200 to 200.
    summarizes the analysis results. As the average travel cost in a market decreases while the average disutility of Internet use remains the same, average retail price, total quantity sold, physical store profit, monopoly manufacturer profit, and thus, total channel profit increase. On the other hand, the quantity sold through the Internet and the profit of the Internet store decrease with a decreasing average travel cost relative to the average disutility of Internet use. We find that a channel that has an advantage over the other kind of channel serves a larger portion of the market. In a market with a high average travel cost, in which the Internet store has a relative advantage over the physical store, for example, the Internet store becomes a mass-retailer serving a larger portion of the market. This result implies that the Internet becomes a more significant distribution channel in those markets characterized by greater geographical dispersion of buyers, or as consumers become more proficient in Internet usage. The results indicate that the degree of price discrimination also varies depending on the distribution of consumer disutility in a market. The manufacturer in a market in which the average travel cost is higher than the average disutility of using the Internet has a stronger incentive for price discrimination than the manufacturer in a market where the average travel cost is relatively lower. We also find that the manufacturer has a stronger incentive to maintain a high price level when the average travel cost in a market is relatively low. Additionally, the retail competition effect due to Internet channel introduction strengthens as average travel cost in a market decreases. This result indicates that a manufacturer's channel power relative to that of the independent physical retailer becomes stronger with a decreasing average travel cost. 
This implication is counter-intuitive, because it is widely believed that the negative impact of Internet channel introduction on a competing physical retailer is more significant in a market like Russia, where consumers are more geographically dispersed, than in a market like Hong Kong, which has a condensed geographic distribution of consumers.
    illustrates how this happens. When managers consider the overall impact of the Internet channel, however, they should consider not only channel power, but also sales volume. When both are considered, the introduction of the Internet channel is revealed as more harmful to a physical retailer in Russia than to one in Hong Kong, because the decrease in sales volume for a physical store due to Internet channel competition is much greater in Russia than in Hong Kong. The results show that the manufacturer is always better off with any type of Internet store introduction. The independent physical store benefits from opening its own Internet store when the average travel cost is high relative to the disutility of using the Internet. Under the opposite market condition, however, the independent physical retailer could be worse off when it opens its own Internet outlet and coordinates both outlets (RI). This is because the low average travel cost significantly reduces the channel power of the independent physical retailer, further aggravating the already weak channel power caused by myopic inter-channel price coordination. The results imply that channel members and policy makers should explicitly consider the factors determining the relative distributions of both kinds of consumer disutility when they make a channel decision involving an Internet channel. These factors include the suitability of a product for Internet shopping, the level of E-Commerce readiness of a market, and the degree of geographic dispersion of consumers in a market. Despite the academic contributions and managerial implications, this study is limited in the following ways. First, a series of numerical analyses were conducted to derive equilibrium solutions due to the complex forms of the demand functions. In the process, we set V=100, ${\lambda}$=1, and ${\beta}$=0.01. Future research may change this parameter value set to check the generalizability of this study.
Second, the five different scenarios for market conditions were analyzed. Future research could try different sets of parameter ranges. Finally, the model setting allows only one monopoly manufacturer in the market. Accommodating competing multiple manufacturers (brands) would generate more realistic results.
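The consumer-choice logic underlying the demand functions can be sketched numerically: each consumer at location x buys from whichever channel gives higher nonnegative net utility, a physical store costing travel (V − p − λ|x − x_s|) versus an Internet store costing disutility (V − p − δ). V = 100 and λ = 1 follow the parameter values reported above; the prices, store location, and δ grid are illustrative assumptions:

```python
def internet_share(delta, p_r=60.0, p_i=60.0, V=100.0, lam=1.0, xs=0.0):
    """Fraction of a uniformly spaced consumer line buying online, when
    each consumer picks the channel with the higher nonnegative utility."""
    grid = [x / 10.0 for x in range(-500, 501)]  # consumers on [-50, 50]
    buy_online = 0
    for x in grid:
        u_store = V - p_r - lam * abs(x - xs)  # travel cost to the store
        u_net = V - p_i - delta                # disutility of buying online
        if max(u_store, u_net) >= 0 and u_net > u_store:
            buy_online += 1
    return buy_online / len(grid)

# Lower Internet disutility -> larger Internet-channel share.
print(internet_share(delta=5.0), internet_share(delta=20.0))
```

This reproduces the qualitative result above: when travel cost is high relative to δ (dispersed consumers), the Internet store serves the larger portion of the market.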

  • Memory Organization for a Fuzzy Controller.

    • Jee, K.D.S.;Poluzzi, R.;Russo, B.
      • Proceedings of the Korean Institute of Intelligent Systems Conference
      • /
      • 1993.06a
      • /
      • pp.1041-1043
      • /
      • 1993
    • Fuzzy logic based Control Theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership function inside hardware circuits. The proposed hardware structure optimizes the memoried size by using particular form of the vectorial representation. The process of memorizing fuzzy sets, i.e. their membership function, has always been one of the more problematic issues for the hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is commonly [1,2,8,9,10,11] used to limit the membership functions either to those having triangular or trapezoidal shape, or pre-definite shape. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters ( ight, base, critical points, etc.). This however results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definitions and gives the users the opportunity to choose membership functions of any shape. However, a significant memory waste can as well be registered. It is indeed possible that for each of the given fuzzy sets many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that almost in all cases common points among fuzzy sets, i.e. points with non null membership values are very few. 
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on this hypothesis. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets describing the term set, and 32 discretization levels for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 3 × (5 + 3) = 24. The memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set, with a word dimension of 8 × 5 bits; the dimension of the memory would therefore have been 128 × 40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
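The memory sizing above can be worked through with the figures from the text (128 elements, 8 sets, 32 truth levels, at most 3 non-null values per element); variable names here are illustrative:

```python
# Sketch of the memory-sizing arithmetic described in the text.
from math import ceil, log2

n_elements = 128   # universe of discourse
n_sets = 8         # fuzzy sets in the term set
n_levels = 32      # discretization levels for membership values
nfm = 3            # max non-null memberships per element

dm_m = ceil(log2(n_levels))   # bits per membership value -> 5
dm_fm = ceil(log2(n_sets))    # bits for the set index    -> 3

length = nfm * (dm_m + dm_fm)        # word length -> 24 bits
sparse_memory = n_elements * length  # -> 128 * 24 = 3072 bits

vectorial_length = n_sets * dm_m                  # one value per set -> 40 bits
vectorial_memory = n_elements * vectorial_length  # -> 128 * 40 = 5120 bits

print(length, sparse_memory, vectorial_memory)  # 24 3072 5120
```

The sparse scheme thus needs 3072 bits against 5120 for full vectorial memorization, a 40% reduction for this term set.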
Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise, the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have only a very small influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
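The comparator-based weight lookup can be sketched in software as follows. Each memory row holds up to nfm (index, value) pairs, and a rule's antecedent index is compared against the stored indices; names and values here are illustrative, not taken from the paper:

```python
# Sketch of the rule-weight computation: a software analogue of the
# hardware comparator that matches a rule's set index against the
# (index, value) pairs stored for one element of the universe.

def rule_weight(row, rule_set_index):
    """row: list of (set_index, membership_value) pairs for one element of U."""
    for set_index, value in row:
        if set_index == rule_set_index:  # comparator match
            return value
    return 0  # no non-null membership for this fuzzy set -> weight is zero

# hypothetical row for element 64, non-null on sets 3 and 4
row_64 = [(3, 20), (4, 12)]
print(rule_weight(row_64, 3))  # 20
print(rule_weight(row_64, 5))  # 0
```

In hardware this comparison runs in parallel over the stored pairs, which is why the time performance matches the fully vectorial scheme even though only non-null values are stored.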
The number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].

