• Title/Summary/Keyword: 추출 함수 (extraction function)

Search Results: 1,120

A Comprehensive Computer Program for Monitor Unit Calculation and Beam Data Management: Independent Verification of Radiation Treatment Planning Systems (방사선치료계획시스템의 독립적 검증을 위한 선량 계산 및 빔데이터 관리 프로그램)

  • Kim, Hee-Jung;Park, Yang-Kyun;Park, Jong-Min;Choi, Chang-Heon;Kim, Jung-In;Lee, Sang-Won;Oh, Heon-Jin;Lim, Chun-Il;Kim, Il-Han;Ye, Sung-Joon
    • Progress in Medical Physics, v.19 no.4, pp.231-240, 2008
  • We developed a user-friendly program to independently verify monitor units (MUs) calculated by radiation treatment planning systems (RTPS), as well as to manage beam databases in the clinic. The off-axis factor, beam hardening effect, inhomogeneity correction, and depth correction were incorporated into the program algorithm to improve the accuracy of the calculated MUs. The beam database in the program was designed to use measured data from routine quality assurance (QA) processes for timely updates. To enhance user convenience, a graphical user interface (GUI) was developed using Visual Basic for Applications. In order to evaluate the accuracy of the program for various treatment conditions, MU comparisons were made for 213 phantom cases and for 108 cases from 17 patients treated by 3D conformal radiation therapy. The MUs calculated by the program and by the RTPS agreed within ±3% for the phantom cases and ±5% for the patient cases, except for cases of extreme inhomogeneity. Through Visual Basic for Applications and a Microsoft Excel worksheet interface, the program can automatically generate a beam data book for clinical reference and a comparison template for beam data management. The program developed in this study can be used to verify the accuracy of the RTPS for various treatment conditions and thus can serve as a tool for routine RTPS QA as well as for independent MU checks. In addition, its beam database management interface can update beam data periodically and thus can be used to monitor multiple beam databases efficiently.
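
As an editor-added sketch (not the authors' program), an independent MU check of this kind typically rebuilds the monitor units from tabulated beam data and compares them with the RTPS value; every beam-data factor below is a hypothetical placeholder:

```python
# Minimal sketch of an independent MU check (assumed structure, not the
# authors' program): MU = prescribed dose / (reference dose rate x output
# factor x PDD x off-axis factor x inhomogeneity correction).
# All numbers are hypothetical placeholders.

def independent_mu(prescribed_dose_cgy, dose_rate_ref, output_factor,
                   pdd, off_axis_factor, inhom_correction):
    dose_per_mu = (dose_rate_ref * output_factor * pdd
                   * off_axis_factor * inhom_correction)
    return prescribed_dose_cgy / dose_per_mu

mu = independent_mu(prescribed_dose_cgy=200.0,  # 2 Gy per fraction
                    dose_rate_ref=1.0,          # cGy/MU at reference conditions
                    output_factor=0.985,        # field-size output factor
                    pdd=0.672,                  # fractional depth dose at the calc point
                    off_axis_factor=1.00,
                    inhom_correction=1.02)
rtps_mu = 298.0                                  # hypothetical RTPS value
print(f"independent MU = {mu:.1f}, deviation vs RTPS = {100*(mu - rtps_mu)/rtps_mu:+.1f}%")
```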


Keyword Network Analysis for Technology Forecasting (기술예측을 위한 특허 키워드 네트워크 분석)

  • Choi, Jin-Ho;Kim, Hee-Su;Im, Nam-Gyu
    • Journal of Intelligence and Information Systems, v.17 no.4, pp.227-240, 2011
  • New concepts and ideas often result from extensive recombination of existing concepts or ideas. Both researchers and developers build on existing concepts and ideas in published papers or registered patents to develop new theories and technologies, which in turn serve as a basis for further development. As the importance of patents increases, so does that of patent analysis. Patent analysis is largely divided into network-based and keyword-based analyses. The former lacks the ability to analyze technological information in detail, while the latter is unable to identify the relationships between such technologies. In order to overcome the limitations of network-based and keyword-based analyses, this study blends the two methods and proposes a keyword network-based analysis methodology. In this study, we collected significant technology information from each patent related to Light Emitting Diodes (LED) through text mining, built a keyword network, and then executed a community network analysis on the collected data. The results of the analysis are as follows. First, the patent keyword network showed very low density and an exceptionally high clustering coefficient. Technically, density is obtained by dividing the number of ties in a network by the number of all possible ties. The value ranges between 0 and 1, with higher values indicating denser networks and lower values indicating sparser networks. In real-world networks, the density varies depending on the size of a network; increasing the size of a network generally leads to a decrease in density. The clustering coefficient is a network-level measure that illustrates the tendency of nodes to cluster in densely interconnected modules. It captures the small-world property, in which a network can be highly clustered while still having a small average distance between nodes despite the large number of nodes. Therefore, the low density of the patent keyword network means that its nodes are connected only sparsely overall, while the high clustering coefficient shows that neighboring nodes are closely connected to one another. Second, the cumulative degree distribution of the patent keyword network, like other knowledge networks such as citation or collaboration networks, followed a clear power-law distribution. A well-known mechanism behind this pattern is preferential attachment, whereby a node with more links is likely to attract further new links as the network evolves. Unlike normal distributions, a power-law distribution has no representative scale. This means that one cannot pick a representative or average value, because there is always a considerable probability of finding much larger values. Networks with power-law degree distributions are therefore often referred to as scale-free networks. The presence of a heavy-tailed, scale-free distribution represents the fundamental signature of an emergent collective behavior of the actors who contribute to forming the network. In our context, the more frequently a patent keyword is used, the more often it is selected by researchers and is associated with other keywords or concepts to constitute and convey new patents or technologies. The evidence of a power-law distribution thus points to preferential attachment as the origin of the heavy-tailed distribution in the growing patent keyword network.
Third, we found that among keywords that flowed into a particular field, the vast majority of keywords with new links joined existing keywords in the associated community in forming the concept of a new patent. This finding held for both the short-term (4-year) and long-term (10-year) analyses. Furthermore, using the keyword combination information derived from the proposed methodology, one can forecast which concepts will combine to form a new patent dimension and refer to those concepts when developing a new patent.
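
As an editor-added illustration (not the study's code) of the measures discussed above, NetworkX can compute the density, average clustering coefficient, and cumulative degree distribution of a keyword co-occurrence network; the edges below are hypothetical:

```python
# Hedged sketch of the network measures discussed above, using NetworkX.
# The keyword co-occurrence edges are hypothetical placeholders.
import networkx as nx
from collections import Counter

edges = [("LED", "phosphor"), ("LED", "substrate"), ("LED", "epitaxy"),
         ("phosphor", "substrate"), ("epitaxy", "wafer"), ("wafer", "substrate")]
G = nx.Graph(edges)

density = nx.density(G)                # ties / all possible ties, in [0, 1]
clustering = nx.average_clustering(G)  # tendency to form tight local modules
degree_counts = Counter(dict(G.degree()).values())

print(f"density = {density:.3f}, avg clustering = {clustering:.3f}")

# Cumulative degree distribution; on a large patent keyword network this is
# where a power-law (scale-free) tail would show up.
total = G.number_of_nodes()
cum = 0
for k in sorted(degree_counts, reverse=True):
    cum += degree_counts[k]
    print(f"P(degree >= {k}) = {cum/total:.2f}")
```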

Surface Tension-Water Saturation Relationship as the Function of Soil Particle Size and Aquifer Depth During Groundwater Air Sparging (대수층 폭기공정에서 토양입경 및 지하수 깊이에 따른 표면장력과 함수율의 상관관계)

  • Kim, Heon-Ki;Kwon, Han-Joon
    • Journal of Soil and Groundwater Environment, v.14 no.6, pp.65-70, 2009
  • Reduction of groundwater surface tension prior to air sparging (surfactant-enhanced air sparging, SEAS) is known to increase air saturation in the aquifer under influence, possibly enhancing the removal rates of volatile contaminants. Although SEAS is known to be efficient for increasing air saturation, little information is available for different hydrogeological settings, including soil particle size and aquifer depth. We investigated water saturations in the sparging influence zone during SEAS using one-dimensional columns packed with sands of different particle sizes and at different aquifer depths. An anionic surfactant was used to suppress the surface tension of water. Two different sands were used; their air entry pressures were measured to be 15.0 cmH2O and 36.3 cmH2O, respectively. No significant difference was observed in the water saturation-surface tension relationship between sands of different particle sizes. As the surface tension decreased, the water saturation decreased to a minimum and then increased with further decrease in surface tension. Both sands reached their lowest water saturation when the surface tension was set at approximately 42 dyne/cm. SEAS was conducted at three different aquifer depths: 41 cm, 81 cm, and 160 cm. The water saturation-surface tension relationship was consistent regardless of aquifer depth. The size of the sparging influence zone during SEAS, measured using a two-dimensional model, was found to follow the changes in air saturation measured using the one-dimensional model. Considering the diverse hydrogeological settings to which SEAS may be applied, these results may provide useful information for designing the SEAS process.
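
As editor-added background (not part of the abstract), the link between surface tension and air entry follows from the Young-Laplace relation for the capillary entry pressure of a pore of radius $r$,

$$h_e = \frac{2\sigma\cos\theta}{\rho g r}, \qquad \frac{h_e(\sigma_1)}{h_e(\sigma_2)} \approx \frac{\sigma_1}{\sigma_2}\ \ \text{(contact angle assumed unchanged)},$$

so lowering the surface tension from about 72 dyne/cm to the 42 dyne/cm reported above would scale the measured air entry pressures of 15.0 and 36.3 cmH2O down to roughly 8.8 and 21.2 cmH2O, which is consistent with the easier air penetration and higher air saturation observed under SEAS.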

A Lower Bound Estimation on the Number of Micro-Registers in Time-Multiplexed FPGA Synthesis (시분할 FPGA 합성에서 마이크로 레지스터 개수에 대한 하한 추정 기법)

  • 엄성용
    • Journal of KIISE: Computer Systems and Theory, v.30 no.9, pp.512-522, 2003
  • For a time-multiplexed FPGA, a circuit is partitioned into several subcircuits so that they temporally share the same physical FPGA device through hardware reconfiguration. In these architectures, all the hardware reconfiguration information, called contexts, is generated and downloaded into the chip, and the pre-scheduled context switches then occur at the proper times. Typically, the size of the chip required to implement a circuit depends both on the maximum number of LUT blocks required to implement the function of each subcircuit and on the maximum number of micro-registers needed to store results across context switches at any one time. Therefore, many partitioning or synthesis methods try to minimize these two factors. In this paper, we present a new estimation technique to find a lower bound on the number of micro-registers obtainable by any synthesis method, without performing actual synthesis or design space exploration. Lower bound estimation is important in the sense that it greatly helps to evaluate the results of previous work and even of future work. If the estimated lower bound exactly matches the actual number in the design result, we can say that the result is guaranteed to be optimal. In contrast, if they do not match, two cases are possible: a better (tighter) lower bound might be estimated, or a synthesis result better than those of the previous work might still be found. Our experimental results show that there are some differences between the numbers of micro-registers and our estimated lower bounds. One reason for these differences seems to be that our estimation targets the result with the minimum number of micro-registers among all possible candidates, regardless of the usage of other resources such as LUTs, while the previous work takes both LUTs and micro-registers into account. In addition, this implies that our method may be limited in estimating an exact bound, because the problem itself is much more complicated than LUT estimation and thus needs further improvement, and/or that there may exist other synthesis results better than those of the previous work.
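
As an editor-added sketch (not the paper's algorithm), one simple lower bound of this kind counts, for each context switch, the values produced before the switch and consumed after it, on the assumption that each such value must occupy a micro-register across the switch:

```python
# Illustrative sketch (not the paper's estimation technique): a simple lower
# bound on micro-registers for a circuit already partitioned into contexts.
# Assumption: every value produced in one context and consumed in a later
# context is held in a micro-register across the intervening switches.

def micro_register_lower_bound(producers, consumers, context_of):
    """producers: dict value -> producing node
       consumers: dict value -> list of consuming nodes
       context_of: dict node -> context index (0, 1, 2, ...)"""
    n_contexts = max(context_of.values()) + 1
    live = [0] * n_contexts  # live[c] = values alive at the switch after context c
    for value, prod in producers.items():
        first = context_of[prod]
        last = max(context_of[c] for c in consumers.get(value, [prod]))
        for c in range(first, last):  # value crosses the switches after contexts first..last-1
            live[c] += 1
    return max(live) if live else 0

# Toy example: v0 is produced in context 0 and used in contexts 1 and 2;
# v1 is produced in context 1 and used in context 2.
producers = {"v0": "n0", "v1": "n1"}
consumers = {"v0": ["n2", "n3"], "v1": ["n3"]}
context_of = {"n0": 0, "n1": 1, "n2": 1, "n3": 2}
print(micro_register_lower_bound(producers, consumers, context_of))  # -> 2
```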

Thermal-Denaturation of File Fish Myofibrillar Protein and Protective Effect of Sucrose, Sorbitol and Amino Acids (말쥐치 근원섬유단백질의 열안정성과 몇 가지 첨가제의 영향)

  • CHOI Young-Joon;PYEUN Jae-Hyeung
    • Korean Journal of Fisheries and Aquatic Sciences, v.18 no.5, pp.455-463, 1985
  • The thermal denaturation of myofibrillar protein from the dorsal skeletal muscle of file fish was investigated by measuring the denaturation constant (K_D) and thermodynamic parameters at various temperatures. The protective effects of sucrose, sorbitol, and amino acids added individually or in combination were also discussed. The denaturation rate, as reflected in the inactivation of myofibrillar Ca-ATPase, followed first-order kinetics. The K_D values at 25°C, 30°C, and 35°C were 19.52×10⁻⁵, 112.25×10⁻⁵, and 247.20×10⁻⁵, respectively. The activation energy of the reaction at 30°C was 43 kcal/mole. The protective effects of sucrose, sorbitol, glycine, alanine, and Na-glutamate increased with concentration, but the effects of sorbitol and Na-glutamate decreased beyond 1.0 mole. Basic amino acids such as arginine and lysine did not show any protective effect against thermal denaturation. In the case of mixed addition, the effects of Na-glutamate with glycine, sorbitol with glycine, and sorbitol with sucrose or with Na-glutamate were enhanced 1.2 to 7.0 times relative to the control (mixing ratio 1:1; concentration range 0.5 to 1.25 mole). Under frozen conditions at −20°C, the two mixtures Na-glutamate with glycine and sorbitol with sucrose showed clear protective effects.
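
As an editor-added check (not part of the abstract), the reported rate constants are broadly consistent with the Arrhenius relation; using the 25°C and 35°C values,

$$E_a = \frac{R\,\ln(K_{D,35}/K_{D,25})}{1/T_{25} - 1/T_{35}} = \frac{8.314 \times \ln\!\left(\dfrac{247.20\times10^{-5}}{19.52\times10^{-5}}\right)}{\dfrac{1}{298.15} - \dfrac{1}{308.15}}\ \mathrm{J/mol} \approx 1.9\times10^{5}\ \mathrm{J/mol} \approx 46\ \mathrm{kcal/mole},$$

which agrees reasonably with the 43 kcal/mole reported at 30°C.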


The Wormicidal Substances of Fresh Water Fishes on Clonorchis sinensis VII. The Effect of Linoleic Acid and Ethyl Linoleate on Parasite Viability (간흡충에 대한 살충성 물질에 관한 연구)

  • Lee, Jae-Gu;Lee, Sang-Bok;Kim, Pyeong-Gil
    • Parasites, Hosts and Diseases, v.26 no.3, pp.175-178, 1988
  • In an attempt to analyze the clonorchicidal activity of linoleic acid and ethyl linoleate in vitro, the wormicidal effects on Clonorchis sinensis were chronologically monitored in dose titration experiments. Encysted metacercariae were killed within 31.0±4.0 min, 149.3±4.1 min, and 207.0±13.5 min with 100.0 mg, 0.1 mg, and 0.001 mg of linoleic acid, respectively. The time required for linoleic acid to kill adult worms was 167.0±0.8 min with 100.0 mg, 253.0±0.8 min with 0.1 mg, and 277.0±0.8 min at the 0.001 mg titration. The clonorchicidal activity of ethyl linoleate was relatively delayed, as death was observed within 263.3±2.9 min, 286.0±0.5 min, and 318.0±0.8 min for the 100.0 mg/ml, 0.1 mg/ml, and 0.001 mg/ml concentrations, respectively. The wormicidal effects observed with these pure anti-clonorchal substances were found to be similar to the biological activity of native products derived from the mucus of the fresh water fish.


The Immunohistochemical Analysis for the Expression of Survivin, an Inhibitor of Apoptosis Protein, in Non-small Cell Lung Cancer (비소세포폐암에서 아포프토시스 억제 단백질 Survivin 발현에 관한 면역조직학적 분석)

  • Ko, Mi-Hye;Myoung, Na-Hye;Lee, Jae-Whan;Cho, Eun-Mi;Park, Jae-Seuk;Kim, Keun-Youl;Lee, Kye-Young
    • Tuberculosis and Respiratory Diseases, v.48 no.6, pp.909-921, 2000
  • Background: Defects in apoptotic signaling pathways play an important role in tumor initiation, progression, metastasis, and resistance to treatment. Several proteins that may promote tumorigenesis by inhibiting apoptosis have been identified. The survivin protein is a member of the inhibitor of apoptosis protein (IAP) family. Unlike other IAPs, it is expressed during the fetal period but not in differentiated adult tissues. Many reports have stated that survivin is selectively expressed in many cancer cell lines and cancer tissues. We performed an immunohistochemical analysis of survivin expression in non-small cell lung cancer to evaluate its clinical implications. Methods: Twenty-nine surgically resected lung cancers were examined. Immunohistochemical staining was performed by the immunoperoxidase technique using an avidin-biotinylated horseradish peroxidase complex on 4 μm sections of formalin-fixed, paraffin-embedded tissue. An anti-survivin polyclonal antibody was used as the primary antibody, and an anti-p53 monoclonal antibody was also used to analyze the correlation between survivin and p53 expression. Survivin expression scores were determined as the sum of the stained area and staining intensity. Results: Immunohistochemical analysis showed cancer-specific expression of survivin in 20 of 29 cases (69.0%). Western blot analysis also showed selective survivin expression in tumor tissue. There was no correlation between survivin expression and clinicopathological parameters or prognosis. We also analyzed the correlation between survivin expression and p53 expression, but found none. Conclusion: We confirmed the tumor-specific expression of survivin in non-small cell lung cancer. However, this expression was not correlated with clinical parameters such as histology, tumor stage, recurrence, or survival rate, nor was it statistically correlated with p53 expression.


A Study on Electron Dose Distribution of Cones for Intraoperative Radiation Therapy (수술중 전자선치료에 있어서 선량분포에 관한 연구)

  • Kang, Wee-Saing;Ha, Sung-Whan;Yun, Hyong-Geun
    • Progress in Medical Physics, v.3 no.2, pp.1-12, 1992
  • For intraoperative radiation therapy (IORT) using electron beams, a cone system that delivers a large dose to the tumor during surgery while sparing the surrounding normal tissue should be developed, and dosimetry of the cone system is necessary to find proper X-ray collimator settings as well as to obtain useful data for clinical use. We developed a docking-type cone system consisting of two aluminum parts: a holder and a cone. The cones, ranging from 4 cm to 9 cm in diameter in 1 cm steps at 100 cm SSD of the photon beam, are 28 cm long circular tubes. The system has two 26 cm long holders: one for cones of 7 cm diameter or larger and another for the smaller ones. On the side of the holder is an aperture for insertion of a lamp and mirror to observe the treatment field. The depth dose curve, the dose profile and output factor at the depth of dose maximum, and the dose distribution in water for each cone size were measured with a p-type silicon detector controlled by a linear scanner for several extra openings of the X-ray collimators. For a given combination of electron energy and cone size, the opening of the X-ray collimators affected the surface dose, the depths of dose maximum and of the 80% dose, the dose profile, and the output factor. The variation of the output factor was the most remarkable: the output factors for 9 MeV electrons, for example, ranged from 0.637 to 1.549. The opening of the X-ray collimators changes the quantity of scattered electrons reaching the IORT cone system, which in turn changes the dose distribution as well as the output factor. Dosimetry of an IORT cone system is therefore essential to minimize uncertainty in clinical use.
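
As an editor-added sketch (not the paper's procedure), an output factor is commonly taken as the ratio of the dose per MU at the depth of dose maximum for a given cone and collimator opening to that of a reference condition; the readings below are hypothetical placeholders:

```python
# Hedged sketch: output factors as ratios of dose per MU at d_max for each
# cone / extra collimator opening to a reference reading. All values are
# hypothetical placeholders, not measurements from the paper.

reference_dose_per_mu = 1.000  # cGy/MU at d_max for the reference condition

measurements = {
    # (cone diameter cm, extra collimator opening cm): dose per MU at d_max
    (7, 0): 0.985,
    (7, 2): 1.124,
    (9, 0): 1.010,
}

output_factors = {setting: dose / reference_dose_per_mu
                  for setting, dose in measurements.items()}

for (cone, opening), of in sorted(output_factors.items()):
    print(f"cone {cone} cm, extra opening {opening} cm: OF = {of:.3f}")
```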


Evaluation of Myocardial Oxygen Consumption with ¹¹C-Acetate and 3D PET/CT: By Applying Recirculation Correction Method and Modified One-Compartmental Tracer Kinetic Modeling (¹¹C-Acetate와 3차원 PET/CT를 이용한 심근의 산소 소모량 평가: 재순환 교정법 및 수정 단일구획 추적자 동적 모델 적용)

  • Chun, In-Kook;Hwang, Kyung-Hoon;Lee, Sang-Yoon;Kim, Jin-Su;Lee, Jae-Sung;Shin, Hee-Won;Lee, Min-Kyung;Yoon, Min-Ki;Choe, Won-Sick
    • Nuclear Medicine and Molecular Imaging, v.42 no.4, pp.275-284, 2008
  • Purpose: We intended to evaluate myocardial oxygen consumption (MVO2) by applying a recirculation correction and a modified one-compartment model, to obtain a reference range of MVO2 in a normal young population, and to reveal the effect of recirculation on the time-activity curve (TAC). Materials and Methods: In nine normal male volunteers with a mean age of 26.3±4.0 years, MVO2 was estimated with 925 MBq (25 mCi) of ¹¹C-acetate (Neuroscience Research Institute, Gachon University of Medicine and Science, Incheon, Korea) and PET/CT (Biograph 6, Siemens Medical Solutions, Germany). Analysis software included MATLAB® v7.1 (MathWorks, Inc., United States), Excel® 2007 (Microsoft, United States), and SPSS® v12.0 (SPSS Inc., United States). Twenty-three frames of 12×10 s, 5×60 s, 3×120 s, and 2×300 s duration were used. The modified one-compartment model and the recirculation correction method were applied. Statistical analysis was performed using tests of normality, ANOVA, and post-hoc (Scheffé) analysis, with a p-value of less than 0.05 considered significant. Results: The normal reference ranges of MVO2 were 3.18-4.64×10⁻⁴ ml/g/sec, 1.91-3.94×10⁻⁴ ml/g/sec, 4.31-6.40×10⁻⁴ ml/g/sec, 2.84-4.53×10⁻⁴ ml/g/sec, and 3.42-5.00×10⁻⁴ ml/g/sec in the septum, the inferior wall, the lateral wall, the anterior wall, and the entire wall, respectively. In addition, it was noted that the dual exponentiality of the clearance curve is due to the recirculation effect and that the curve is essentially mono-exponential in character. Conclusion: ¹¹C-acetate is a worthwhile radiotracer for assessing MVO2. Recirculated ¹¹C can influence the myocardial TAC, so the recirculation correction must be considered when measuring MVO2.
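
As an editor-added sketch (not the authors' code), the myocardial clearance constant used as an index of MVO2 in one-compartment ¹¹C-acetate models can be estimated by fitting a mono-exponential to the clearance phase of the TAC; the data points below are hypothetical:

```python
# Hedged sketch: fit a mono-exponential to the clearance phase of a myocardial
# 11C-acetate time-activity curve to obtain k_mono, an index related to MVO2
# in one-compartment models. Times and activities are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 7, 9, 12, 15, 20, 25, 30], dtype=float)      # minutes, clearance phase
tac = np.array([95, 80, 68, 54, 43, 29, 20, 14], dtype=float)  # kBq/ml, hypothetical

def mono_exp(t, a0, k_mono):
    """Single-exponential clearance model: A(t) = a0 * exp(-k_mono * t)."""
    return a0 * np.exp(-k_mono * t)

(p_a0, p_k), _ = curve_fit(mono_exp, t, tac, p0=(tac[0], 0.05))
print(f"A0 = {p_a0:.1f} kBq/ml, k_mono = {p_k:.3f} /min")
```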

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.77-97, 2019
  • The large amount of data that emerges from the hyper-connected environment of the Fourth Industrial Revolution is a major factor distinguishing it from existing production environments. This environment has the two-sided feature of producing data while using it, and the data thus produced creates further value. Because of this massive scale, future information systems need to process more data, in terms of quantity, than existing information systems; in terms of quality, they also need the ability to extract the necessary information from that large amount of data. In a small-scale information system, a person can understand the system accurately and obtain the necessary information, but in complex systems that are difficult to understand accurately, it becomes increasingly difficult to acquire the desired information. In other words, more accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient information-system performance can be addressed by building a semantic web, which enables various kinds of information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. As in most other organizations, IT has been introduced in the military, and most work is now done through information systems. As existing systems come to contain increasingly large amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system forms a large semantic network of data through connections with other systems, can draw on a wide range of databases, and has the advantage of searching more precisely and quickly through the relationships between predefined concepts. In this paper, we propose a defense ontology as a method for effective data management and decision support. In order to judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. The existing system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, because the complicated logistics information system with its large amount of data had become difficult to use. It takes pre-specified information from the existing logistics system and displays it as web pages; however, little can be confirmed beyond the few items specified in advance, extending it with additional functions when necessary is time-consuming, and it is organized by category without a search function. Therefore, like the existing logistics system, it can be used easily only by those who already know the system well. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. To construct the logistics situation management system on the ontology, useful functions such as performance-based logistics support contract management and a component dictionary were further identified and included in the ontology.
In order to confirm whether the constructed ontology can be used for decision support, meaningful analysis functions such as calculating aircraft utilization rates and querying performance-based logistics contracts need to be implemented. In particular, in contrast to past ontology studies that built static ontology databases, this study represents time-series data whose values change over time, such as the daily state of each aircraft, in the ontology, and it is confirmed that utilization rates can be calculated on various criteria through the constructed ontology, beyond the conventionally computed utilization rate. In addition, data related to performance-based logistics contracts, introduced as a new maintenance method for aircraft and other munitions, can be queried in various ways, and the performance indexes used in such contracts can easily be calculated through reasoning and functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, further confirming the usefulness of the constructed ontology. Finally, the failure rate and reliability of each component, including the MTBF of selected items, can be calculated based on actual part consumption records, and from these the mission reliability and system reliability are calculated. To confirm the usability of the constructed ontology-based logistics situation management system, an evaluation based on the Technology Acceptance Model (TAM), a representative model for measuring technology acceptance, indicated that the proposed system is more useful and convenient than the existing system.
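
As an editor-added sketch (not the paper's implementation), reliability figures of the kind described above can be derived from MTBF values estimated from part consumption records, assuming an exponential failure model; all numbers below are hypothetical:

```python
# Hedged sketch: failure/reliability figures from MTBF estimated out of part
# consumption records, under an exponential failure model. Components, hours,
# and failure counts are hypothetical placeholders.
import math

def mtbf_from_consumption(operating_hours, failures):
    """MTBF = total operating hours / number of recorded failures."""
    return operating_hours / failures

def reliability(mtbf, mission_hours):
    """R(t) = exp(-t / MTBF) under an exponential failure model."""
    return math.exp(-mission_hours / mtbf)

components = {"radar": (12_000, 4), "hydraulic pump": (12_000, 7)}  # (hours, failures)
mission_hours = 10

r_components = {
    name: reliability(mtbf_from_consumption(hours, fails), mission_hours)
    for name, (hours, fails) in components.items()
}
# For a series system, mission reliability is the product of component reliabilities.
mission_reliability = math.prod(r_components.values())

print(r_components)
print(f"mission reliability = {mission_reliability:.4f}")
```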
In order to confirm whether the constructed ontology can be used for decision support, it is necessary to implement a meaningful analysis function such as calculation of the utilization rate of the aircraft, inquiry about performance-based military contract. Especially, in contrast to building ontology database in ontology study in the past, in this study, time series data which change value according to time such as the state of aircraft by date are constructed by ontology, and through the constructed ontology, It is confirmed that it is possible to calculate the utilization rate based on various criteria as well as the computable utilization rate. In addition, the data related to performance-based logistics contracts introduced as a new maintenance method of aircraft and other munitions can be inquired into various contents, and it is easy to calculate performance indexes used in performance-based logistics contract through reasoning and functions. Of course, we propose a new performance index that complements the limitations of the currently applied performance indicators, and calculate it through the ontology, confirming the possibility of using the constructed ontology. Finally, it is possible to calculate the failure rate or reliability of each component, including MTBF data of the selected fault-tolerant item based on the actual part consumption performance. The reliability of the mission and the reliability of the system are calculated. In order to confirm the usability of the constructed ontology-based logistics situation management system, the proposed system through the Technology Acceptance Model (TAM), which is a representative model for measuring the acceptability of the technology, is more useful and convenient than the existing system.