• Title/Summary/Keyword: Graph analysis


The Application of Operations Research to Librarianship: Some Research Directions (운영연구(OR)의 도서관응용 -그 몇가지 잠재적응용분야에 대하여-)

  • Choi Sung Jin
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.4
    • /
    • pp.43-71
    • /
    • 1975
  • Operations research has developed rapidly since its origins in World War II. Practitioners of O.R. have contributed to almost every aspect of government and business. More recently, a number of operations researchers have turned their attention to library and information systems, and the author believes that significant research has resulted. It is the purpose of this essay to introduce the library audience to some of these accomplishments, to present some of the author's hypotheses on aspects of library management to which he believes O.R. has great potential, and to suggest some future research directions. Some problem areas in librarianship where O.R. may play a part are discussed and summarized below. (1) Library location. In location problems it is usually necessary to strike a balance between accessibility and cost. Many mathematical methods are available for identifying optimal locations once the balance between these two criteria has been decided. The major difficulties lie in relating cost to size and in taking future change into account when discriminating among possible solutions. (2) Planning new facilities. Standard approaches to using mathematical models for simple investment decisions are well established. If the problem is one of choosing the most economical way of achieving a certain objective, one may compare the alternatives using one of the discounted cash flow techniques. In other situations it may be necessary to use a cost-benefit approach. (3) Allocating library resources. In order to allocate resources to best advantage the librarian needs to know how the effectiveness of the services he offers depends on the way he deploys his resources. The O.R. approach to such problems is to construct a model representing effectiveness as a mathematical function of the levels of different inputs (e.g., numbers of people in different jobs, acquisitions of different types, physical resources). (4) Long term planning. Resource allocation problems are generally concerned with periods of up to one and a half years ahead. The longer term certainly offers both greater freedom of action and greater uncertainty, so it is difficult to generalize about long term planning problems. In other fields, however, O.R. has made a significant contribution to long range planning, and it is likely to make one in librarianship as well. (5) Public relations. It is generally accepted that actual and potential users are too ignorant both of the range of library services provided and of how to make use of them. How should services be brought to the attention of potential users? The answer seems to lie in obtaining empirical evidence from controlled experiments in which a group of libraries participates. (6) Acquisition policy. In comparing alternative policies for the acquisition of materials one needs to know, first, the implications of each policy for the services which depend on the stock, and second, the relative importance to be ascribed to each service for each class of user. By dealing with the first, formal models will allow the librarian to concentrate his attention on the value judgements necessary for the second. (7) Loan policy. The approach to choosing between loan policies is much the same as the previous approach. (8) Manpower planning. For large library systems one should consider constructing models which permit the skills that will be needed in the future to be compared with predictions of the skills that will be available, so as to allow informed decisions. (9) Management information systems for libraries. A great deal of data is available in libraries as a by-product of routine recording activities, and it is particularly tempting, when procedures are computerized, to make summary statistics available as a management information system. The value of such information to particular decisions that may have to be taken in the future is best assessed in terms of a model of the relevant problem. (10) Management gaming. One of the most common uses of a management game is as a means of developing staff's ability to take decisions. The value of such exercises depends on the validity of the computerized model. If the model were sufficiently simple to take the form of a mathematical equation, decision-makers would probably be able to learn adequately from a graph; more complex situations require simulation models. (11) Diagnostic tools. Libraries are sufficiently complex systems that it would be useful to have simple means of telling whether performance can be regarded as satisfactory and which, if it cannot, also provide pointers to what is wrong. (12) Data banks. It would appear to be worth considering establishing a bank for certain types of data. If certain items on questionnaires were to take a standard form, a greater pool of data would be available for various analyses. (13) Effectiveness measures. The meaning of a library performance measure is not readily interpreted. Each measure must be assessed in relation to the corresponding measures for earlier periods of time and to a standard measure, which may be the corresponding measure in another library, the 'norm', 'best practice', or user expectations.
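To make point (2) above concrete, here is a minimal sketch of a discounted cash flow comparison (net present value) between two facility options; the cash flows and the 8% discount rate are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch only: comparing two hypothetical facility options by
# net present value, one of the discounted cash flow techniques mentioned in (2).

def npv(rate, cash_flows):
    """Net present value of a series of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Option A: cheap to build, expensive to run; Option B: the reverse.
option_a = [-100_000] + [-12_000] * 10   # outlays over a 10-year horizon
option_b = [-150_000] + [-6_000] * 10

rate = 0.08  # assumed discount rate
print("NPV A:", round(npv(rate, option_a)))
print("NPV B:", round(npv(rate, option_b)))
# The less negative NPV identifies the more economical way to meet the objective.
```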


Changes in Feeding Behavior of Cacopsylla pyricola Foerster (Hemiptera: Psyllidae) and Activities of Several Insecticides (몇 가지 약제처리에 대한 꼬마배나무이(Cacopsylla pyricola Foerster)의 섭식행동 변화 및 살충효과)

  • Park, Min-Woo;Kwon, Hay-Ri;Yu, Yong-Man;Youn, Young-Nam
    • The Korean Journal of Pesticide Science
    • /
    • v.20 no.1
    • /
    • pp.72-81
    • /
    • 2016
  • The feeding behaviors of the pear psylla, Cacopsylla pyricola, and their changes in response to five insecticides were recorded and analyzed with the electrical penetration graph (EPG) technique, and mortality against the insecticides was measured in the laboratory. The general feeding behavior patterns of C. pyricola were changed by the insecticide treatments; in particular, the type and frequency of waveforms differed depending on the insecticide applied. The total durations of the transition waveform PE1 and of phloem ingestion (waveform PE2) were significantly different between insecticide-treated and untreated leaves. When the five insecticides were applied to pear leaves, differences in feeding patterns were recorded. With benfuracarb, the total duration of non-probing (waveform Np) was longer than with any other insecticide, whereas with flonicamid and deltamethrin the total durations of stylet penetration (waveform PA) and xylem ingestion (waveform PG), respectively, were longer than with the other insecticides. When the feeding behavior of C. pyricola after insecticide treatment was expressed as time-based rates, the proportion of non-probing (waveform Np) was greater than those of initial penetration (waveform PA), penetration and ingestion in parenchyma cells (waveforms PC1+PC2), phloem ingestion (waveforms PD+PE1+PE2), and xylem ingestion (waveform PG). In direct spray tests on C. pyricola, mortality with imidacloprid was higher than with any other insecticide at 24 hours after treatment, but all insecticides showed 100% mortality after 48 hours. On the other hand, when the five insecticides were sprayed on pear leaves and C. pyricola was then placed on the treated leaves, benfuracarb was the most toxic. This result is consistent with the EPG data, which showed a relatively longer total duration of waveform Np (non-probing) under benfuracarb treatment.
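As a rough illustration of the kind of summary used in EPG work like the study above, the sketch below totals waveform durations per treatment with pandas; the column names and numbers are hypothetical, not the paper's data.

```python
# Minimal sketch (not the authors' code): total duration of each EPG waveform
# per insecticide treatment, from a hypothetical table of recorded events.
import pandas as pd

events = pd.DataFrame({
    "treatment":  ["benfuracarb", "benfuracarb", "flonicamid", "control"],
    "waveform":   ["Np", "PA", "PG", "PE2"],
    "duration_s": [1250.0, 310.0, 980.0, 1430.0],
})

# Total duration (seconds) of each waveform under each treatment.
totals = events.pivot_table(index="treatment", columns="waveform",
                            values="duration_s", aggfunc="sum", fill_value=0)
print(totals)
```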

Quantitative Analysis of GBCA Reaction by Mol Concentration Change on MRI Sequence (MRI sequence에 따른 GBCA 몰농도별 반응에 대한 정량적 분석)

  • Jeong, Hyun Keun;Jeong, Hyun Do;Kim, Ho Chul
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.2
    • /
    • pp.182-192
    • /
    • 2015
  • In this paper, we show how the signal response changes with molar concentration when contrast-enhanced MRI is performed with a GBCA (gadolinium-based contrast agent), and how the pattern of change depends on MRI sequences based on different physical principles. For this study we built an MRI phantom ourselves, diluting 500 mmol Gadoteridol with saline in 28 different containers to give concentrations from 500 to 0 mmol. The phantom was then scanned with physically different MRI sequences (T1 SE, T2 FLAIR, T1 FLAIR, 3D FLASH, T1 3D SPACE, and 3D SPCIR) in a 1.5 T bore. The results were as follows: T1 spin echo, total SI (signal intensity) 15608.7, max peak 1352.6 at 1 mmol; T2 FLAIR, total SI 9106.4, max peak 1721.6 at 0.4 mmol; T1 FLAIR, total SI 20972.5, max peak 1604.9 at 1 mmol; 3D FLASH, total SI 20924.0, max peak 1425.7 at 40 mmol; 3D SPACE 1 mm, total SI 6399.0, max peak 528.3 at 3 mmol; 3D SPACE 5 mm, total SI 6276.5, max peak 514.6 at 2 mmol; 3D SPCIR, total SI 1778.8, max peak 383.8 at 0.4 mmol. In most sequences, high signal intensity appeared at the lower, more diluted concentrations rather than at high concentrations, and both the maximum peak and the shape of the curve differed from sequence to sequence. With these quantitative results on GBCA response by sequence, we expect that more practical contrast-enhanced MR protocols can be designed for clinical use.
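A hedged sketch of the two summary figures reported above (total SI and the max peak with its concentration), computed from an invented dilution series rather than the paper's 28-point phantom data:

```python
# Illustrative only: total signal intensity and peak concentration from a
# GBCA dilution series. The values below are made up for demonstration.
import numpy as np

concentration_mmol = np.array([0.0, 0.4, 1.0, 2.0, 5.0, 40.0, 500.0])
signal_intensity   = np.array([80., 1700., 1350., 900., 400., 150., 60.])

total_si = signal_intensity.sum()
peak_idx = signal_intensity.argmax()
print(f"Total SI: {total_si:.1f}")
print(f"Max peak: {signal_intensity[peak_idx]:.1f} at "
      f"{concentration_mmol[peak_idx]} mmol")
```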

The Influence of Iteration and Subset on True X Method in F-18-FPCIT Brain Imaging (F-18-FPCIP 뇌 영상에서 True-X 재구성 기법을 기반으로 했을 때의 Iteration과 Subset의 영향)

  • Choi, Jae-Min;Kim, Kyung-Sik;NamGung, Chang-Kyeong;Nam, Ki-Pyo;Im, Ki-Cheon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.122-126
    • /
    • 2010
  • Purpose: F-18-FPCIT, which has a strong affinity for the dopamine transporter (DAT) located at neural terminal sites, offers diagnostic information about DAT density in the striatum, especially in Parkinson's disease. In this study we varied the iteration and subset numbers, measured SUV±SD and contrast from brain and phantom images reconstructed at each setting, and suggest an appropriate range for these parameters. Materials and Methods: The study was performed with 10 normal volunteers with no history of Parkinson's disease or cerebral disease and with a Flangeless Esser PET phantom from Data Spectrum Corporation. 5.3±0.2 mCi of F-18-FPCIT was injected into the normal group, and the PET phantom was assembled according to the ACR PET phantom instructions; its actual ratio between hot spheres and background was 2.35 to 1. Brain and phantom images were acquired 3 hours after injection, for ten minutes each. A Siemens Biograph 40 TruePoint scanner was used, and the True-X method was applied for image reconstruction. The iteration and subset numbers were set to 2 iterations/8 subsets, 3 iterations/16 subsets, 6 iterations/16 subsets, 8 iterations/16 subsets, and 8 iterations/21 subsets. To measure SUVs on the brain images, ROIs were drawn on the right putamen, and the coefficient of variation (CV) was calculated to indicate uniformity at each iteration/subset combination. In the phantom study, we measured the ratio between hot spheres and background at each combination; ROIs of the same size were drawn at the same slice and location. Results: Mean SUVs were 10.60, 12.83, 13.87, 13.98, and 13.5 at the respective combinations, with fluctuations between consecutive settings of 22.36%, 10.34%, 1.1%, and 4.8%. The fluctuation of mean SUV was lowest between 6 iterations/16 subsets and 8 iterations/16 subsets. CV was 9.07%, 11.46%, 13.56%, 14.91%, and 19.47%, respectively, meaning that as the iteration and subset numbers increase, image uniformity worsens. The fluctuations of CV between consecutive settings were 2.39, 2.1, 1.35, and 4.56, and the fluctuation of uniformity was lowest between 6 iterations/16 subsets and 8 iterations/16 subsets. In the contrast test, the measured ratios were 1.92:1, 2.12:1, 2.10:1, 2.13:1, and 2.11:1 at the respective combinations; the setting of 8 iterations/16 subsets reproduced the true hot-sphere-to-background ratio most closely. Conclusion: Based on these findings, SUVs and uniformity may also vary with other reconstruction parameters such as the filter or FWHM. Mean SUV and uniformity showed the lowest fluctuation between 6 iterations/16 subsets and 8 iterations/16 subsets, and 8 iterations/16 subsets showed the hot-sphere-to-background ratio closest to the true value. It cannot be concluded, however, that only 6 iterations/16 subsets and 8 iterations/16 subsets yield images suitable for clinical diagnosis; other factors may produce better images. For more exact clinical diagnosis through quantitative analysis of DAT density in the striatum, quantitative values from healthy subjects also need to be secured.
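For reference, the coefficient of variation and the measured hot-sphere-to-background ratio used in the evaluation above can be computed as below; the ROI values are hypothetical, and the CV definition (SD/mean) is the standard one, assumed rather than quoted from the paper.

```python
# Illustrative only: uniformity (CV) and measured contrast from hypothetical ROI values.
import numpy as np

roi_background  = np.array([10.2, 9.8, 10.5, 9.6, 10.1])  # e.g. values in background ROIs
hot_sphere_mean = 23.5                                      # mean value in a hot-sphere ROI

cv = roi_background.std(ddof=1) / roi_background.mean() * 100   # uniformity (%)
contrast_ratio = hot_sphere_mean / roi_background.mean()        # measured ratio, vs. true 2.35:1
print(f"CV = {cv:.2f}%  contrast = {contrast_ratio:.2f}:1")
```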


Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings published by professional rating agencies such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally charge a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper credit rating model. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more rating categories; for example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in determining credit ratings. In practice, however, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are the most prevalent in finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared toward multiclass classification problems such as credit rating. Thus, researchers have tried to extend the original SVM to multiclass classification. Hitherto, a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature; however, only a few types of MSVM have been tested in prior studies that apply MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of these techniques to a real-world case of credit rating in Korea. The application is corporate bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea; the data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for predicting bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
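Several of the MSVM strategies named above have off-the-shelf counterparts; the sketch below fits One-Against-One, One-Against-All, and an ECOC variant with scikit-learn on synthetic data. The bond-rating data set, kernels, and parameters of the study are not reproduced here, so this is only a structural illustration, not the authors' experiment.

```python
# Rough sketch of three multiclass SVM strategies on synthetic 5-class data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier, OutputCodeClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "One-Against-One": OneVsOneClassifier(SVC(kernel="rbf", C=1.0)),
    "One-Against-All": OneVsRestClassifier(SVC(kernel="rbf", C=1.0)),
    "ECOC":            OutputCodeClassifier(SVC(kernel="rbf", C=1.0),
                                            code_size=2, random_state=0),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)   # holdout accuracy
    print(f"{name}: {acc:.3f}")
```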

Analysis of Mistakes in the Photosynthesis Unit in Biology II Textbooks and Survey of Biology Teachers' Recognition of Them (생물 II 교과서 광합성 단원의 오류 분석 및 생물 교사의 오류 인지 조사)

  • Park, Hae-Kyung;Yoon, Ki-Soon;Kwon, Duck-Kee
    • Journal of Science Education
    • /
    • v.32 no.1
    • /
    • pp.33-46
    • /
    • 2008
  • The purpose of this study was to determine whether incorrect descriptions or simple errors exist in the photosynthesis unit of Biology II textbooks under the 7th national curriculum and, if so, whether high school teachers recognize and properly correct those mistakes. Mistakes in the photosynthesis units of 8 different Biology II textbooks were identified by comparison with several reference books and through examination by three plant physiologists. After the mistakes were analyzed, a survey using textbook content containing the mistakes was conducted among high school teachers teaching Biology II. As a result, 48 mistakes were identified in 13 subjects. As many as four mistakes were found in a single subject in one textbook, and the same mistake was found repeatedly in several textbooks. The survey showed that teachers who pointed out the mistakes exactly also corrected them properly; however, the proportion of such teachers among the 35 who replied to the survey was less than 50%. The correction rates out of total responses were highest for questions #6 (43%), #4-3 (40%), and #1-2 (32%), which contained a simple mistake in a graph, a wrong word, and a wrong picture, respectively. However, no one pointed out or corrected questions #5-1 and #5-2, which contained a Z scheme of the light reactions lacking a vertical-axis legend that should be labeled as electron energy or standard reduction potential. These results indicate that the likelihood of the mistakes in the photosynthesis unit of Biology II textbooks being corrected and taught properly by teachers may be low. To reduce the possibility that students develop misconceptions about photosynthesis, a list of printing errors should be provided to teachers, and/or a training program or workshop for in-service high school biology teachers is recommended.


The Evaluation of Resolution Recovery Based Reconstruction Method, Astonish (Resolution Recovery 기반의 Astonish 영상 재구성 기법의 평가)

  • Seung, Jong-Min;Lee, Hyeong-Jin;Kim, Jin-Eui;Kim, Hyun-Joo;Kim, Joong-Hyun;Lee, Jae-Sung;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.15 no.1
    • /
    • pp.58-64
    • /
    • 2011
  • Objective: Three-dimensional reconstruction with resolution recovery modeling offers high spatial resolution and contrast because it precisely models spatial blurring as a function of distance from the detector plane. The aim of this study was to evaluate one of the resolution recovery reconstruction methods (Astonish, Philips Medical), compare it with other iterative reconstructions, and verify its clinical usefulness. Materials and Methods: NEMA IEC PET body phantom and Flanged Jaszczak ECT phantom (Data Spectrum Corp., USA) studies were performed on a Philips Skylight SPECT system under four different conditions: short or long (twice the short) radius, and half or full (40 kcts/frame) acquisition counts. The Astonish reconstruction method was compared with two other vendor-supplied iterative reconstructions, MLEM and 3D-OSEM. For quantitative analysis, the contrast ratios obtained from the IEC phantom test were compared; reconstruction parameters were determined by an optimization study using a graph of contrast ratio versus background variability. Qualitative comparison was performed with the Jaszczak ECT phantom and human myocardial data. Results: The overall contrast ratio was higher with Astonish than with the other methods. For the largest hot sphere (37 mm diameter), Astonish showed about 27.1% and 17.4% higher contrast ratios than MLEM and 3D-OSEM, respectively, in the short-radius study, and about 40.5% and 32.6% higher for the long radius. The effect of acquisition counts was insignificant. In the qualitative studies with the Jaszczak phantom and human myocardial data, Astonish showed the best image quality. Conclusion: We found that Astonish can provide more reliable clinical results through better image quality than the other iterative reconstruction methods. Although further clinical studies are required, Astonish could be used in the clinic with confidence to enhance image quality.
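The optimization graph mentioned above plots contrast ratio against background variability; a minimal sketch of the two NEMA-style figures of merit, with invented ROI values and an assumed true sphere-to-background activity ratio, might look like this:

```python
# Hedged sketch of NEMA IEC figures of merit: percent contrast for a hot sphere
# and background variability. All numbers are hypothetical.
import numpy as np

hot_mean  = 41.0                                        # mean counts in a hot-sphere ROI
bkg_rois  = np.array([19.2, 20.1, 18.7, 20.5, 19.9])    # same-size background ROIs
true_ratio = 4.0                                        # assumed filled activity ratio

percent_contrast = ((hot_mean / bkg_rois.mean()) - 1) / (true_ratio - 1) * 100
background_variability = bkg_rois.std(ddof=1) / bkg_rois.mean() * 100
print(f"contrast = {percent_contrast:.1f}%, "
      f"background variability = {background_variability:.1f}%")
```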


Dose Reduction Effect According to the CT Attenuation Correction Method When Low Dose Is Applied in PET/CT (PET/CT 저선량 적용 시 CT 감쇠보정법에 따른 피폭선량 저감효과)

  • Jung, Seung Woo;Kim, Hong Kyun;Kwon, Jae Beom;Park, Sung Wook;Kim, Myeong Jun;Sin, Yeong Man;Kim, Yeong Heon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.127-133
    • /
    • 2014
  • Purpose: Reducing the dose of PET/CT is important because of the patient's X-ray exposure. The aim of this study was to evaluate the effectiveness of low-dose PET/CT imaging with CTAC and Q.AC through patient and phantom studies. Materials and Methods: We used a Discovery 710 PET/CT scanner (GE). The NEMA IEC body phantom was used to evaluate PET data corrected by an ultra-low-dose CT attenuation correction method, and the NU2-94 phantom was used for uniformity. After 70.78 MBq and 22.2 MBq of 18F-FDG were injected into the respective phantoms, PET/CT scans were obtained. PET data were reconstructed using CTAC, whose dose corresponds to diagnostic CT, and Q.AC, whose dose is intended only for attenuation correction. Quantitative analysis was performed using horizontal and vertical profiles; reference data corrected by CTAC were compared with PET data corrected by the ultra-low-dose method, and the relative error was assessed. Overweight and normal-weight patients also underwent PET/CT scans with low-dose and standard-dose protocols, and the relative error and signal-to-noise ratio of SUV were analyzed. Results: In the phantom test, PET data corrected by CTAC and by Q.AC were compared with each other; the relative error of the Q.AC profile was calculated and plotted. In the patient studies, PET data for the overweight and normal-weight patients were reconstructed by CTAC and Q.AC under routine dose and ultra-low dose. With the routine dose, the relative error was small; with high dose, the overweight patient's data were effectively corrected by Q.AC. Conclusion: In the phantom study, the CTAC method at 80 kVp and 10 mA resulted in beam-hardening artifact. PET data corrected by ultra-low-dose CTAC could not be quantified properly, whereas data corrected at the same dose by Q.AC could. In the patient cases, the overweight patient's PET data could be quantified by the Q.AC method, and the relative difference was not significant. Q.AC is an appropriate attenuation correction method when an ultra-low dose is used, and it is therefore expected to be a good way to reduce the patient's exposure dose.
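As a simple illustration of the comparison described above, the relative error of ultra-low-dose Q.AC SUVs against the CTAC reference, and an SNR of SUV taken here as mean/SD (an assumption, since the paper's exact definition is not given), could be computed as follows; the SUV values are invented.

```python
# Illustrative only: point-by-point relative error and SNR of SUV profiles.
import numpy as np

suv_ctac = np.array([2.10, 2.35, 2.50, 2.42, 2.20])  # reference (diagnostic-dose CTAC)
suv_qac  = np.array([2.05, 2.38, 2.47, 2.45, 2.16])  # ultra-low-dose Q.AC

relative_error = (suv_qac - suv_ctac) / suv_ctac * 100   # percent
snr = suv_qac.mean() / suv_qac.std(ddof=1)               # assumed SNR definition
print("relative error (%):", np.round(relative_error, 2))
print(f"SNR of SUV: {snr:.1f}")
```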


Design of a Bit-Serial Divider in GF(2^m) for Elliptic Curve Cryptosystem (타원곡선 암호시스템을 위한 GF(2^m)상의 비트-시리얼 나눗셈기 설계)

  • 김창훈;홍춘표;김남식;권순학
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.12C
    • /
    • pp.1288-1298
    • /
    • 2002
  • To implement an elliptic curve cryptosystem over GF(2^m) at high speed, a fast divider is required. Although a bit-parallel architecture is well suited for high-speed division operations, an elliptic curve cryptosystem requires a large m (at least 163) to provide sufficient security. In other words, since the bit-parallel architecture has an area complexity of O(m^2), it is not suited for this application. In this paper, we propose a new serial-in serial-out systolic array for computing division operations in GF(2^m) using the standard basis representation. Based on a modified version of the binary extended greatest common divisor algorithm, we obtain a new data dependence graph and design an efficient bit-serial systolic divider. The proposed divider has O(m) time complexity and O(m) area complexity. If input data arrive continuously, the proposed divider can produce division results at a rate of one per m clock cycles, after an initial delay of 5m-2 cycles. Analysis shows that the proposed divider provides a significant reduction in both chip area and computational delay time compared to previously proposed systolic dividers with the same I/O format. Since the proposed divider can perform division operations at high speed with reduced chip area, it is well suited for the division circuit of an elliptic curve cryptosystem. Furthermore, since the proposed architecture does not restrict the choice of irreducible polynomial, and has a unidirectional data flow and regularity, it provides high flexibility and scalability with respect to the field size m.
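The abstract describes a hardware divider; as a software reference for the underlying arithmetic, the sketch below performs GF(2^m) division by computing an inverse with the classical extended Euclidean algorithm over GF(2)[x] and then multiplying. It uses a small field, GF(2^4), for demonstration and does not model the paper's modified binary GCD algorithm or its bit-serial systolic structure.

```python
# Software sketch of division in GF(2^m): polynomials over GF(2) are stored as
# Python ints (bit i = coefficient of x^i). Not the paper's hardware algorithm.

def deg(p):
    """Degree of a GF(2) polynomial stored as an int (deg(0) == -1)."""
    return p.bit_length() - 1

def pmul(a, b):
    """Carry-less multiplication in GF(2)[x]."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, m):
    """Remainder of a modulo m in GF(2)[x]."""
    while a and deg(a) >= deg(m):
        a ^= m << (deg(a) - deg(m))
    return a

def pinv(a, m):
    """Inverse of a modulo the irreducible polynomial m (extended Euclid)."""
    r0, r1, s0, s1 = m, a, 0, 1
    while r1:
        shift = deg(r0) - deg(r1)
        if shift < 0:                      # keep deg(r0) >= deg(r1)
            r0, r1, s0, s1 = r1, r0, s1, s0
            continue
        r0 ^= r1 << shift                  # cancel the leading term of r0
        s0 ^= s1 << shift                  # mirror the operation on the cofactor
    assert r0 == 1, "a has no inverse modulo m"
    return pmod(s0, m)

def gf_div(a, b, m):
    """Compute a / b in GF(2^deg(m)) defined by the irreducible polynomial m."""
    return pmod(pmul(a, pinv(b, m)), m)

f = 0b10011                      # x^4 + x + 1, irreducible over GF(2)
q = gf_div(0b1100, 0b0010, f)    # (x^3 + x^2) / x
print(bin(q))                    # 0b110  ->  x^2 + x
assert pmod(pmul(q, 0b0010), f) == 0b1100   # multiply back to check
```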

A Semantic Classification Model for e-Catalogs (전자 카탈로그를 위한 의미적 분류 모형)

  • Kim Dongkyu;Lee Sang-goo;Chun Jonghoon;Choi Dong-Hoon
    • Journal of KIISE:Databases
    • /
    • v.33 no.1
    • /
    • pp.102-116
    • /
    • 2006
  • Electronic catalogs (or e-catalogs) hold information about the goods and services offered or requested by the participants and consequently form the basis of an e-commerce transaction. Catalog management is complicated by a number of factors, and product classification is at the core of these issues. The classification hierarchy is used for spend analysis, customs regulation, and product identification. Classification is the foundation on which product databases are designed, and it plays a central role in almost all aspects of the management and use of product information. However, product classification has received little formal treatment in terms of its underlying model, operations, and semantics. We believe that the lack of a logical model for classification introduces a number of problems not only for the classification itself but also for the product database in general. A classification needs to meet diverse user views to support efficient and convenient use of product information. It needs to be changed and evolved very often, without breaking consistency, as new products are introduced, existing products become extinct, classes are reorganized, and classes are specialized. It also needs to be merged and mapped with other classification schemes without information loss when B2B transactions occur. To satisfy these requirements, a classification scheme should be dynamic enough to accommodate such changes within a reasonable time and cost. The classification schemes widely used today, such as UNSPSC and eClass, however, have many limitations in meeting these requirements for dynamic classification. In this paper, we try to understand what it means to classify products and present how best to represent classification schemes so as to capture the semantics behind the classifications and facilitate mappings between them. Product information carries a wealth of semantics, such as class attributes (e.g., material, time, and place) and integrity constraints. We analyze the dynamic features of product databases and the limitations of existing code-based classification schemes, and we describe the semantic classification model, which satisfies the requirements for the dynamic features of product databases. It provides a means to explicitly and formally express richer semantics for product classes and organizes class relationships into a graph. We believe the model proposed in this paper satisfies the requirements and challenges that have been raised by previous works.
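As a toy illustration of organizing class relationships into a graph with explicit attribute semantics (not the paper's formal model), one might represent classes and a naive attribute-based mapping between two schemes like this; all class names and attributes are invented.

```python
# Toy sketch: product classes as graph nodes carrying attribute semantics,
# plus a naive candidate mapping between two classification schemes.
from dataclasses import dataclass, field

@dataclass
class ProductClass:
    name: str
    attributes: dict = field(default_factory=dict)   # e.g. {"material": "steel"}
    parents: list = field(default_factory=list)      # a graph, not strictly a tree

scheme_a = {
    "fasteners":  ProductClass("fasteners"),
    "steel_bolt": ProductClass("steel_bolt", {"material": "steel"}, ["fasteners"]),
}
scheme_b = {
    "hardware": ProductClass("hardware"),
    "bolt":     ProductClass("bolt", {"material": None}, ["hardware"]),
}

def candidate_mappings(a, b):
    """Pair classes across schemes that share at least one attribute key."""
    for ca in a.values():
        for cb in b.values():
            if ca.attributes.keys() & cb.attributes.keys():
                yield ca.name, cb.name

print(list(candidate_mappings(scheme_a, scheme_b)))   # [('steel_bolt', 'bolt')]
```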