• Title/Summary/Keyword: All-one polynomial


A FPGA Implementation of BIST Design for the Batch Testing (일괄검사를 위한 BIST 설계의 FPGA 구현)

  • Rhee, Kang-Hyeon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.7
    • /
    • pp.1900-1906
    • /
    • 1997
  • In this paper, an efficient BILBO (named EBILBO) is designed for BIST that enables batch testing when a circuit is implemented on an FPGA. The proposed batch-testing algorithm tests at normal operating speed with a single pin that can control every part of a large and complex circuit. A PRTPG is used for test pattern generation and a MISR is used for PSA. The batch-testing algorithm is coded in VHDL at the behavioral level, so the models for test pattern generation, signature analysis and compression are easily modified. The EBILBO area and the performance of the designed BIST are evaluated on FPGA with ISCAS89 benchmark circuits. For circuits with more than 600 cells, it is shown that the area is reduced to below 30%, about 500K test patterns are generated flexibly, and the fault coverage ranges from 88.3% to 100%. The EBILBO for the proposed batch-testing BIST can execute normal and test-mode operation concurrently in real time within $s+n+(2^s/2^p-1)$ clocks (where, for the CUT, $s$ is the number of primary inputs, $n$ is the number of registers, and $p$ is the order of the polynomial). The proposed algorithm, coded in VHDL and packaged as a library, can be widely applied to DFT, satisfying both the design and the test requirements at the same time.

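The response-compaction step named in the abstract above (a MISR driven by a characteristic polynomial) can be pictured with a short, hedged sketch. The polynomial taps and the test-response words below are illustrative placeholders; this is not the EBILBO design or its VHDL code.

```python
# Minimal sketch of multiple-input signature register (MISR) compression,
# the response-compaction step used in BIST (illustrative only; the actual
# EBILBO design is written in VHDL and is not reproduced here).

def misr_step(state, response, taps, width=16):
    """Shift the MISR once: feed back the tapped bits and XOR in one response word."""
    feedback = 0
    for t in taps:                      # taps encode the characteristic polynomial
        feedback ^= (state >> t) & 1
    state = ((state << 1) | feedback) & ((1 << width) - 1)
    return state ^ (response & ((1 << width) - 1))

def compress(responses, taps=(15, 13, 12, 10), width=16):
    """Compact a stream of circuit responses into one signature word."""
    state = 0
    for r in responses:
        state = misr_step(state, r, taps, width)
    return state

if __name__ == "__main__":
    # Hypothetical test responses from a circuit under test (CUT).
    good = [0x1A2B, 0x3C4D, 0x5E6F, 0x0F0F]
    faulty = [0x1A2B, 0x3C4D, 0x5E7F, 0x0F0F]   # one flipped bit
    print(hex(compress(good)), hex(compress(faulty)))  # signatures differ
```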

Remote Sensing Information Models for Sediment and Soil

  • Ma, Ainai
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.739-744
    • /
    • 2002
  • Recently we have found that sediments should be treated separately from the lithosphere and soil separately from the biosphere; together they form a mixed sediments-soil-sphere (Seso-sphere) that can be analysed with particulate mechanics. Erosion and sediment transport both involve particulate matter moved by water or wind, but ancient sediments erode in the same way as soil. Nowadays real soil has been greatly reduced; many places retain only sediments with an artificially ploughed farming layer, which is what the sediments-soil-sphere denotes. This paper discusses erosion modelling for the sediments-soil-sphere. Sediments-soil-sphere erosion includes water erosion, wind erosion, melt-water erosion, gravitational water erosion and mixed erosion. We have established geographical remote sensing information models (RSIM) for the different erosion types, using remote sensing digital images together with ground-truth data from hydrological stations and meteorological observatories, processed with digital image processing and a geographical information system (GIS). Each RSIM is a geographical multidimensional grey non-linear equation built with non-dimensional analysis and mathematical statistics. The mixed-erosion equation is more complex: it is a geographical polynomial grey non-linear equation that must be solved with time-space fuzzy condition equations. RSIM is a digital image model in which physical factors and geographical parameters are separated. There are many geographical analogous criteria, which are non-dimensional factor groups, and the geographical RSIM can automatically adapt these criteria to maps of different scales. For example, a smaller-scale map (1:1 000 000) uses one or two analogous criteria, whereas a larger-scale map (1:10 000) uses four or five. The geographical parameters, including coefficients and indexes, also change with the images. The geographical RSIM achieves higher precision than purely mathematical models, whether mathematical equations or mathematical statistics models.

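As a rough illustration of the kind of non-dimensional statistical modelling the abstract above refers to, the sketch below fits a power-law erosion model over dimensionless factor groups by log-linear least squares. The variable names, values and exponents are hypothetical and are not taken from the paper's RSIM.

```python
# Hedged illustration only: fit E = a * pi1**b1 * pi2**b2, where pi1 and pi2 are
# hypothetical dimensionless factor groups ("analogous criteria"), by taking logs
# and solving an ordinary least-squares problem. This is not the RSIM of the paper.
import numpy as np

rng = np.random.default_rng(0)
pi1 = rng.uniform(0.5, 2.0, 50)          # e.g. a rainfall-intensity group (made up)
pi2 = rng.uniform(0.1, 1.0, 50)          # e.g. a slope-cover group (made up)
E = 3.0 * pi1**1.4 * pi2**0.6 * np.exp(rng.normal(0.0, 0.05, 50))  # synthetic "erosion"

X = np.column_stack([np.ones_like(pi1), np.log(pi1), np.log(pi2)])
coef, *_ = np.linalg.lstsq(X, np.log(E), rcond=None)
a, b1, b2 = np.exp(coef[0]), coef[1], coef[2]
print(f"a={a:.2f}, b1={b1:.2f}, b2={b2:.2f}")    # recovers roughly 3.0, 1.4, 0.6
```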

Determination of optimal dietary valine concentrations for improved growth performance and innate immunity of juvenile Pacific white shrimp Penaeus vannamei

  • Daehyun Ko;Chorong Lee;Kyeong-Jun Lee
    • Fisheries and Aquatic Sciences
    • /
    • v.27 no.3
    • /
    • pp.171-179
    • /
    • 2024
  • A study was conducted to evaluate the dietary valine (Val) requirement of Pacific white shrimp (Penaeus vannamei). Five isonitrogenous (353 g/kg) and isocaloric (4.08 kcal/g) semi-purified diets containing graded levels of Val (2.7, 5.1, 8.7, 12.1 or 16.0 g/kg) were formulated. Quadruplicate groups of 12 shrimp (average body weight: 0.46 ± 0.00 g) were fed one of the experimental diets (2%-5% of total body weight) for 8 weeks. Maximum weight gain was observed in the 8.7 g/kg Val group. However, growth performance was reduced when the Val concentration in the diet was higher than 12.1 g/kg. Feed conversion ratio was significantly increased with 2.7 and 16.0 g/kg Val inclusion. Shrimp fed the diets containing 2.7 g/kg Val showed significantly lower protein efficiency ratio and whole-body crude protein and Val concentrations. Dietary inclusion of Val significantly improved the relative expression of insulin-like growth factor binding protein and immune-related genes (prophenoloxidase, lysozyme and crustin) in the hepatopancreas, and the 8.7 g/kg Val group showed the highest expression among all the groups. The dietary Val requirement for maximum growth of juvenile P. vannamei was estimated at 9.54 g/kg (27.2 g/kg based on protein level) by polynomial regression analysis on growth, and at 9.27 g/kg (26.2 g/kg based on protein level) by broken-line regression analysis.

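The requirement estimate quoted above comes from second-order polynomial regression of growth on dietary Val level. The sketch below shows that calculation on hypothetical weight-gain values; the gain numbers are placeholders, not the study's data.

```python
# Hedged sketch of the polynomial-regression requirement estimate: fit a quadratic
# to weight gain versus dietary valine level and take the vertex as the optimum.
# The weight-gain numbers below are hypothetical, not values reported in the paper.
import numpy as np

val = np.array([2.7, 5.1, 8.7, 12.1, 16.0])            # dietary Val, g/kg diet (from the abstract)
gain = np.array([250.0, 420.0, 520.0, 500.0, 430.0])   # illustrative weight-gain values

c2, c1, c0 = np.polyfit(val, gain, 2)                  # gain ≈ c2*val**2 + c1*val + c0
optimum = -c1 / (2.0 * c2)                             # vertex of the fitted parabola
print(f"estimated Val requirement ≈ {optimum:.2f} g/kg diet")
```
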
Determination of Electron Beam Output Factors of Individual Applicator for ML-15MDX Linear Accelerator (선형가속기 ML-15MDX의 각 Applicator에 대한 전자선 출력선량 계수 결정)

  • Park, Tae-Jin;Kim, Ok-Bae
    • Progress in Medical Physics
    • /
    • v.5 no.1
    • /
    • pp.87-99
    • /
    • 1994
  • Purpose: The determination of electron beam output factors was investigated for each individual applicator and various energies of the ML-15MDX linear accelerator. The electron beam output factors were extended from square to rectangular fields for each applicator size through a least-squares fit to a polynomial expression. Materials: In these experiments, output measurements were obtained for field sizes from 2×2 cm² to 20×20 cm² with different applicator sizes for 4 to 15 MeV electron beam energies. The output factor was defined as the ratio of the maximum dose output on the central axis of the field for an individual applicator size to that of a given field size. Applicator factors were derived by comparison with the output dose of the reference field size of 10×10 cm². Blocks for field shaping were specially designed with a thickness of 10 mm of Lipowitz metal for all electron energies. Two types of output curves are included, output factor versus the side of the square field and output factor versus the variable side lengths X and Y in the one-dimensional (1D) method, to compare the expected values with the experimental ones. Results: The expected output factors for rectangular fields, derived from those of square fields for individual applicator sizes from 2×2 cm² to 20×20 cm² at different electron energies, agreed with the experimental measurements within 2% uncertainty. However, the 1D method showed a 3% discrepancy for small rectangular fields at low electron beam energies. Conclusion: Empirical non-linear polynomial regressions for the square-root method and the 1D method were performed to determine the output factors for various field sizes and electron energies. The expected electron beam outputs of the square-root method for square fields and of the 1D method for rectangular fields were very close to the measurements for all selected electron beam energies.

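The square-root method mentioned in the conclusion above derives a rectangular-field output factor from the two corresponding square-field factors. Below is a hedged sketch that fits a polynomial to hypothetical square-field output factors and applies OF(X, Y) ≈ sqrt(OF(X, X) · OF(Y, Y)); the numbers are illustrative, not the ML-15MDX measurements.

```python
# Hedged sketch of the square-root method for rectangular electron fields:
# fit a polynomial to square-field output factors, then estimate
# OF(X, Y) ≈ sqrt(OF(X, X) * OF(Y, Y)). Values are illustrative placeholders.
import numpy as np

side = np.array([2, 4, 6, 10, 15, 20], dtype=float)          # square field side, cm
of_square = np.array([0.92, 0.96, 0.99, 1.00, 1.01, 1.01])   # hypothetical output factors

poly = np.polynomial.Polynomial.fit(side, of_square, deg=3)  # least-squares polynomial fit

def output_factor_rect(x, y):
    """Square-root method: combine the two equivalent-square output factors."""
    return float(np.sqrt(poly(x) * poly(y)))

print(f"OF(4 x 15 cm^2) ≈ {output_factor_rect(4, 15):.3f}")
```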

The comparisons of three scatter correction methods using Monte Carlo simulation (몬테카를로 시뮬레이션을 이용한 산란보정 방법들에 대한 비교)

  • 봉정균;김희중;이종두;권수일
    • Progress in Medical Physics
    • /
    • v.10 no.2
    • /
    • pp.73-81
    • /
    • 1999
  • Scatter correction for single photon emission computed tomography (SPECT) plays an important role in improving image quality and quantitation. The purpose of this study was to investigate three scatter correction methods using Monte Carlo simulation. A point source and a Jaszczak phantom filled with Tc-99m were simulated with the Monte Carlo code SIMIND. For scatter correction, we applied three methods: the Compton window (CW) method, the triple window (TW) method, and the dual photopeak window (DPW) method. Point sources located at various depths along the center line within a 20-cm phantom were simulated to calculate the window ratios and corresponding scatter fractions by evaluating the polynomial coefficients for the DPW method. The energy windows were W$_1$=92-125 keV, W$_2$=124-126 keV, W$_3$=136-140 keV, W$_4$=140-141 keV, and W$_5$=154-156 keV. The results showed that, in the Jaszczak phantom with cold and hot spheres, TW gave the contrast and percentage recovery closest to those of the ideal image, while CW overestimated and DPW underestimated the contrast of the ideal image. All three scatter correction methods improved image contrast. In conclusion, scatter correction is essential for improving image contrast and accurate quantification, and the choice of scatter correction method should be made on the basis of accuracy and ease of implementation.

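The triple-window (TW) method evaluated above estimates photopeak scatter from two narrow windows flanking the photopeak. A minimal per-pixel sketch follows; the abstract does not spell out which of W$_1$-W$_5$ play which role, so the window widths and counts below are generic placeholders.

```python
# Hedged sketch of triple-energy-window (TEW) scatter correction for one pixel:
# scatter in the photopeak is estimated from two narrow flanking windows,
#   S ≈ (C_low / w_low + C_high / w_high) * w_peak / 2,
# and subtracted from the photopeak counts. Window widths and counts are
# generic placeholders, not the exact W1-W5 windows listed in the abstract.

def tew_correct(c_peak, c_low, c_high, w_peak=28.0, w_low=2.0, w_high=2.0):
    """Return primary (scatter-corrected) counts for one pixel."""
    scatter = (c_low / w_low + c_high / w_high) * w_peak / 2.0
    return max(c_peak - scatter, 0.0)

if __name__ == "__main__":
    # Hypothetical counts in the photopeak and the two flanking narrow windows.
    print(tew_correct(c_peak=10_000.0, c_low=60.0, c_high=20.0))
```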

Efficient Mining of Frequent Subgraph with Connectivity Constraint

  • Moon, Hyun-S.;Lee, Kwang-H.;Lee, Do-Heon
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2005.09a
    • /
    • pp.267-271
    • /
    • 2005
  • The goal of data mining is to extract new and useful knowledge from large-scale datasets. As the amount of available data grows explosively, it has become vitally important to develop faster data mining algorithms for various types of data. Recently, interest in data mining algorithms that operate on graphs has increased; in particular, mining frequent patterns from structured data such as graphs has drawn the attention of many research groups. A graph is a highly adaptable representation scheme used in many domains, including chemistry, bioinformatics and physics. For example, the chemical structure of a given substance can be modelled by an undirected labelled graph in which each node corresponds to an atom and each edge corresponds to a chemical bond between atoms. The Internet can also be modelled as a directed graph in which each node corresponds to a web site and each edge corresponds to a hypertext link between web sites. Notably, in bioinformatics, various kinds of newly discovered data such as gene regulation networks or protein interaction networks can be modelled as graphs. There have been a number of attempts to find useful knowledge from such graph-structured data. One of the most powerful analysis tools for graph-structured data is frequent subgraph analysis: recurring patterns in graph data can provide incomparable insight into that data. However, finding recurring subgraphs is extremely expensive computationally. At the core of the problem there are two computationally challenging subproblems: 1) subgraph isomorphism and 2) enumeration of subgraphs. Problems related to the former are the subgraph isomorphism problem (does graph A contain graph B?) and the graph isomorphism problem (are two graphs A and B the same or not?). Even these simplified versions of the subgraph mining problem are known to be NP-complete or isomorphism-complete, and no polynomial-time algorithm is known so far. The latter is also difficult: with no constraint we would have to generate all $2^n$ subgraphs, where $n$ is the number of vertices of the input graph. In order to find frequent subgraphs in a larger graph database, it is essential to impose an appropriate constraint on the subgraphs to be found. Most current approaches focus on the frequency of a subgraph: the higher the frequency of a graph, the more attention should be given to that graph. Recently, several algorithms that use level-by-level approaches to find frequent subgraphs have been developed. Some recently emerging applications suggest that other constraints such as connectivity could also be useful in mining subgraphs: more strongly connected parts of a graph are more informative. If we restrict the set of subgraphs to mine to more strongly connected parts, the computational complexity can be decreased significantly. In this paper, we present an efficient algorithm to mine frequent subgraphs that are more strongly connected. An experimental study shows that the algorithm scales to larger graphs with more than ten thousand vertices.

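To make the support-counting bottleneck described above concrete, the sketch below counts in how many graphs of a tiny made-up database a candidate subgraph occurs, using networkx's subgraph-isomorphism matcher. It illustrates only the expensive primitive the abstract discusses, not the authors' connectivity-constrained mining algorithm.

```python
# Hedged illustration of the expensive primitive behind frequent subgraph mining:
# counting the support of one candidate subgraph over a graph database via
# subgraph isomorphism. This is NOT the connectivity-constrained algorithm of the
# paper, only the basic operation whose cost motivates such constraints.
import networkx as nx
from networkx.algorithms import isomorphism

def support(candidate, database):
    """Number of database graphs that contain `candidate` as a subgraph."""
    count = 0
    for g in database:
        matcher = isomorphism.GraphMatcher(g, candidate)
        if matcher.subgraph_is_isomorphic():    # NP-complete in general
            count += 1
    return count

if __name__ == "__main__":
    # Tiny made-up database: a triangle with a tail, a square, and a path.
    g1 = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3)])
    g2 = nx.cycle_graph(4)
    g3 = nx.path_graph(4)
    triangle = nx.cycle_graph(3)
    print(support(triangle, [g1, g2, g3]))      # -> 1 (only g1 contains a triangle)
```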

Ideal Freezing Curve Can Avoid the Damage by Latent Heat of Fusion During Freezing (냉동 시 잠재용융열에 의한 피해를 최소화할 수 있는 이상냉동 곡선)

  • 박한기;박영환;윤웅섭;김택수;윤치순;김시호;임상현;김종훈;곽영태
    • Journal of Chest Surgery
    • /
    • v.36 no.4
    • /
    • pp.219-228
    • /
    • 2003
  • Background: Liquid nitrogen freezing techniques have already met with widespread success in biology and medicine as a means of long-term storage of cells and tissues. The use of cryoprotectants such as glycerol and dimethylsulphoxide to prevent ice crystal formation, with carefully controlled rates of freezing and thawing, allows both structure and viability to be retained almost indefinitely. Cryopreservation of different tissues uses different controlled rates of freezing. Material and Method: To find the optimal freezing curve and the corresponding chamber temperature, we approached the thermodynamic calculation of tissues in two ways. One is the direct calculation method, which requires the thermophysical characteristics of all components (latent heat of fusion, area, density, volume, etc.); this calculation is sophisticated and some variables may not be determinable. The other is the indirect calculation method: we froze the tissue with a previously used freezing curve, observed the actual freezing curve of that tissue, and then modified the freezing curve through several calculation steps, namely polynomial regression analysis, time constant calculation, thermal response calculation and inverse calculation of the chamber temperature. Result: We applied the resulting freezing program to mesenchymal stem cells, chondrocytes and osteoblasts. The tissue temperature decreased according to the ideal freezing curve without any temperature rise. We did not find any differences in survival; the reason is postulated to be that the frozen material was too small and consisted of cellular components. We expect a significant difference in cellular viability if the freezing curve is applied to large-scale tissues. Conclusion: This program would be helpful in easily finding the chamber temperature for the ideal freezing curve.

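The indirect method described above reduces, under a first-order thermal-response assumption, to a simple inverse relation: if the tissue obeys dT_tissue/dt = (T_chamber − T_tissue)/τ, then the chamber set-point that realises a desired tissue curve is T_chamber(t) = T_tissue(t) + τ·dT_tissue/dt. The sketch below works through that inverse calculation for a hypothetical linear cooling ramp; the time constant and rates are placeholders, not values from the paper.

```python
# Hedged sketch of the inverse calculation: assuming a first-order thermal response
#   dT_tissue/dt = (T_chamber - T_tissue) / tau,
# the chamber set-point that makes the tissue follow a desired curve is
#   T_chamber(t) = T_tissue(t) + tau * dT_tissue/dt.
# tau and the cooling rate below are hypothetical placeholders.
import numpy as np

tau = 90.0                                   # assumed tissue time constant, s
t = np.arange(0.0, 1800.0, 60.0)             # 30 min, sampled every minute
desired = 20.0 - (1.0 / 60.0) * t            # ideal curve: -1 °C/min from 20 °C

dTdt = np.gradient(desired, t)               # numerical derivative of the ideal curve
chamber = desired + tau * dTdt               # inverse calculation of the chamber set-point

for ti, td, tc in zip(t[:3], desired[:3], chamber[:3]):
    print(f"t={ti:5.0f}s  tissue={td:6.2f}°C  chamber={tc:6.2f}°C")
```
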
SNU 1.5MV Van de Graaff Accelerator (IV) -Fabrication and Aberration Analysis of Magnetic Quadrupole Lens- (SNU 1.5MV 반데그라프 가속기 (IV) -자기 4극 렌즈의 제작과 수차의 분석-)

  • Bak, H.I.;Choi, B.H.;Choi, H.D.
    • Nuclear Engineering and Technology
    • /
    • v.18 no.1
    • /
    • pp.1-8
    • /
    • 1986
  • A magnetic quadrupole doublet was fabricated for use at the pre-target position of the SNU 1.5MV Van de Graaff accelerator, and its optical characteristics were measured and analysed. The physical dimensions are: pole length 180 mm, aperture radius 25 mm, pole tip radius 28.75 mm. The material for the poles and return yokes is carbon steel KS-SM40C. The coils have 480 turns per pole and are air-cooled. Applying a d.c. current of 2.99±0.03 A to the lens and using a Hall probe, the magnetic field components $B_\theta$ and $B_r$ were measured at selected points along each coordinate direction $r$, $\theta$, $z$. From area integration and orthogonal polynomial fitting of the measured data, a magnetic field gradient G=566.3±2.1 gauss/cm at the lens center and an effective length L=208.3±1.44 mm along the lens axis were obtained. The harmonic contents were determined up to the 20-pole term by generalized least-squares fitting. The results indicate that the sextupole/quadrupole ratio is below 1.4±0.9% and all other multipoles are below 0.5% within an 18 mm radius at the center of the lens.

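The harmonic analysis reported above amounts to a least-squares fit of the azimuthal field at fixed radius to a multipole series B_θ(φ) = Σ_n [a_n cos(nφ) + b_n sin(nφ)], with higher multipoles quoted relative to the quadrupole term. The sketch below shows that fit on synthetic data with a small sextupole admixture; the amplitudes are illustrative, not the measured values for this lens.

```python
# Hedged sketch of multipole (harmonic) analysis by generalized least squares:
# fit B_theta(phi) at a fixed radius to sum_n [a_n*cos(n*phi) + b_n*sin(n*phi)]
# and report higher multipoles relative to the quadrupole (n = 2).
# The synthetic field below (2% sextupole) is illustrative only.
import numpy as np

orders = list(range(1, 11))                   # dipole (n=1) up to 20-pole (n=10)
phi = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)

# Synthetic azimuthal field at fixed radius: quadrupole plus a 2% sextupole term.
rng = np.random.default_rng(1)
b_theta = np.cos(2 * phi) + 0.02 * np.cos(3 * phi) + rng.normal(0.0, 0.002, phi.size)

# Design matrix with cos(n*phi) and sin(n*phi) columns, solved by least squares.
design = np.column_stack([np.cos(n * phi) for n in orders] +
                         [np.sin(n * phi) for n in orders])
coef, *_ = np.linalg.lstsq(design, b_theta, rcond=None)
a, b = coef[:len(orders)], coef[len(orders):]
amp = np.hypot(a, b)                          # multipole amplitudes

print(f"sextupole/quadrupole = {amp[2] / amp[1]:.3%}")   # ≈ 2% for the synthetic field
```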