• Title/Summary/Keyword: reliable control

Search Results: 1,902

Applicability of American and European Spirometry Repeatability Criteria to Korean Adults (한국 성인을 대상으로 한 미국 및 유럽 폐활량 검사 재현성 기준의 유용성)

  • Park, Byung Hoon;Park, Moo Suk;Jung, Woo Young;Byun, Min Kwang;Park, Seon Cheol;Shin, Sang Yun;Jeon, Han Ho;Jung, Kyung Soo;Moon, Ji Ae;Kim, Se Kyu;Chang, Joon;Kim, Sung Kyu;Ahn, Song Vogue;Oh, Yeon-Mok;Lee, Sang Do;Kim, Young Sam
    • Tuberculosis and Respiratory Diseases
    • /
    • v.63 no.5
    • /
    • pp.405-411
    • /
    • 2007
  • Background: The objective of this study was to evaluate the clinical applicability of the repeatability criteria recommended by the American Thoracic Society/European Respiratory Society (ATS/ERS) spirometry guidelines and to determine which factors affect the repeatability of spirometry in Korean adults. Methods: We reviewed the spirometry data of 4,663 Korean adults from the Korean National Health and Nutritional Examination Survey (KNHANES), the Chronic Obstructive Pulmonary Disease Cohort (COPD cohort), and the Community-based Cohort Study VI-Fishing village/Islands (community cohort). We measured the anthropometric factors and the differences between the highest and second-highest FVC (dFVC) and $FEV_1$ ($dFEV_1$) from prebronchodilator spirometry. Analyses included the distribution of dFVC and $dFEV_1$, comparison of the values meeting the 1994 ATS repeatability criteria with those meeting the 2005 ATS/ERS repeatability criteria, and linear regression to evaluate the influence of subject characteristics and the change of criteria on spirometric variability. Results: About 95% of subjects were able to reproduce FVC and $FEV_1$ within 150 ml. The KNHANES, based on the 1994 ATS guidelines, showed poorer repeatability than the COPD cohort and community cohort based on the 2005 ATS/ERS guidelines. Demographic and anthropometric factors had little effect on repeatability, explaining only 0.5 to 3% of the variability. Conclusion: We conclude that the new spirometry repeatability criteria recommended by the 2005 ATS/ERS guidelines are also applicable to Korean adults. The repeatability of spirometry depends little on individual characteristics when an experienced technician performs the testing. Therefore, we suggest that sustained efforts in public awareness of the new repeatability criteria, quality control of spirograms, and education of personnel are needed for reliable spirometric results.
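The 150 ml repeatability criterion summarized above amounts to a simple check on a session's maneuvers. A minimal sketch (function and variable names are illustrative, not from the study):

```python
def repeatable(efforts_litres, tolerance_litres=0.150):
    """2005 ATS/ERS-style repeatability: the difference between the two
    largest values (e.g. FVC or FEV1, in litres) must be within 150 ml."""
    best, second = sorted(efforts_litres, reverse=True)[:2]
    return best - second <= tolerance_litres

# Three FVC maneuvers from one session, in litres
print(repeatable([4.12, 4.05, 3.98]))  # dFVC = 0.07 L, within 150 ml
```

The same check applies unchanged to $FEV_1$ values, since the criterion is defined on the two best efforts of either quantity.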

Effects of different cooking methods on folate retention in selected mushrooms (다양한 조리법에 따른 버섯류의 엽산 리텐션)

  • Park, Su-Jin;Park, Sun-Hye;Chung, Heajung;Lee, Junsoo;Hyun, Taisun;Chun, Jiyeon
    • Food Science and Preservation
    • /
    • v.24 no.8
    • /
    • pp.1103-1112
    • /
    • 2017
  • This study was performed to investigate the effects of different cooking methods (boiling, roasting, stir-frying, and deep-frying) on folate retention in 6 kinds of mushrooms (Beech-, button-, Juda's ear-, oak-, oyster-, and winter-mushrooms) frequently consumed in Korea. In order to assure the reliability of the analytical data, the trienzyme extraction-L. casei method was verified and analytical quality control was also evaluated. Folate contents of mushrooms varied from 6.04 to 64.82 μg/100 g depending on the type of mushroom, and were significantly affected by cooking method. Depending on the cooking method, folate contents of mushrooms decreased by 22-48%, 2-31%, and 17-56% for Juda's ear-, oak- and oyster-mushrooms, respectively, while folate increased by 17-90% in Beech mushroom. Overall, the largest weight loss was found in boiled mushrooms and the lowest in deep-fried samples. True folate retention rates considering the processing factor were less than 100% for all cooked mushrooms except the Beech samples. Overall, folate loss was largest with boiling in water and smallest with deep-frying. Both accuracy and precision of the trienzyme extraction-L. casei method were excellent, based on a recovery close to 100% and coefficients of variation less than 3%. The quality control chart of folate analysis (n=26) obtained during the entire study and an international proficiency test (z-score=-0.5) showed that the trienzyme extraction-L. casei method is reliable enough for production of a national folate database.
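True retention as used above accounts for the weight change on cooking, not just the change in concentration. A minimal sketch of the calculation, with hypothetical numbers (the mushroom values below are made up for illustration):

```python
def true_retention(nutrient_cooked, wt_cooked, nutrient_raw, wt_raw):
    """True retention (%): nutrient remaining in the cooked portion
    relative to the nutrient present in the raw portion.
    nutrient_* are contents per 100 g; wt_* are portion weights in g."""
    return (nutrient_cooked * wt_cooked) / (nutrient_raw * wt_raw) * 100.0

# Hypothetical example: 30 ug/100 g raw, 25 ug/100 g after boiling,
# with 100 g raw shrinking to 80 g cooked (weight loss on boiling)
print(round(true_retention(25, 80, 30, 100), 1))  # 66.7 % true retention
```

Note that comparing concentrations alone (25 vs. 30 μg/100 g) would overstate retention, which is why the processing factor matters for the boiled samples above.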

Studies on the Derivation of the Instantaneous Unit Hydrograph for Small Watersheds of Main River Systems in Korea (한국주요빙계의 소유역에 대한 순간단위권 유도에 관한 연구 (I))

  • 이순혁
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.19 no.1
    • /
    • pp.4296-4311
    • /
    • 1977
  • This study was conducted to derive an Instantaneous Unit Hydrograph for an accurate and reliable unitgraph, which can be used for the estimation and control of floods in the development of agricultural water resources and the rational design of hydraulic structures. Eight small watersheds were selected as study basins from the Han, Geum, Nakdong, Yeongsan and Inchon River systems, which may be considered the main river systems in Korea. The areas of the small watersheds are within the range of 85 to 470 km². The aim was to derive an accurate Instantaneous Unit Hydrograph under the condition of a short duration of heavy rain and uniform rainfall intensity, with basic and reliable data of rainfall records, pluviographs, and river stage records from the main river systems mentioned above. The relations between the measurable unitgraph and watershed characteristics such as watershed area, A, river length, L, and centroid distance of the watershed area, Lca, were investigated. In particular, this study laid emphasis on the derivation and application of the Instantaneous Unit Hydrograph (IUH) by applying Nash's conceptual model and by using an electronic computer. The IUH by Nash's conceptual model and the IUH by flood routing, which can be applied to ungaged small watersheds, were derived and compared with each other against the observed unitgraph. The IUH for each small watershed can be solved by using an electronic computer. The results of these studies are summarized as follows: 1. Distribution of uniform rainfall intensity appears in the analysis of the temporal rainfall pattern of the selected heavy rainfall events. 2. The mean value of the recession constant, $K_1$, is 0.931 in all watersheds observed. 3. Time to peak discharge, Tp, occurs at the position of 0.02 Tb, the base length of the hydrograph, an indication of a lower value than that in larger watersheds. 4.
Peak discharge, Qp, in relation to the watershed area, A, and effective rainfall, R, is found to be $Q_p = \frac{0.895}{A^{0.145}}AR$, with a highly significant correlation coefficient, 0.927, between peak discharge, Qp, and effective rainfall, R. A design chart for the peak discharge (refer to Fig. 15) with watershed area and effective rainfall was established by the author. 5. The mean slopes of the main streams are within the range of 1.46 to 13.6 meters per kilometer. These indicate higher slopes in the small watersheds than in larger watersheds. Lengths of the main streams are within the range of 9.4 to 41.75 kilometers, which can be regarded as short distances. It is remarkable that the time of flood concentration was more rapid in the small watersheds than in the other, larger watersheds. 6. Length of main stream, L, in relation to the watershed area, A, is found to be $L = 2.044A^{0.48}$, with a highly significant correlation coefficient, 0.968. 7. Watershed lag, Lg, in hours, in relation to the watershed area, A, and length of main stream, L, was derived as $L_g = 3.228A^{0.904}L^{-1.293}$ with high significance. On the other hand, it was found that watershed lag, Lg, could also be expressed as $L_g = 0.247\left(\frac{LL_{ca}}{\sqrt{S}}\right)^{0.604}$ in connection with LLca, the product of the main stream length and the centroid length of the basin, which can be regarded as a measure of the shape and the size of the watershed together with the slopes, apart from the watershed area, A. But the latter showed a lower correlation than the former in the significance test. Therefore, it can be concluded that watershed lag, Lg, is more closely related with such watershed characteristics as watershed area and length of main stream in the small watersheds. 
The empirical formula for the peak discharge per unit area, qp, in ㎥/sec/km², was derived as $q_p = 10^{-0.389-0.0424L_g}$ with high significance, r=0.91. This indicates that the peak discharge per unit area of the unitgraph is in inverse proportion to the watershed lag time. 8. The base length of the unitgraph, Tb, in connection with the watershed lag, Lg, was expressed as $T_b = 1.14 + 0.564\left(\frac{L_g}{24}\right)$, which was defined with high significance. 9. For the derivation of the IUH by applying the linear conceptual model, the storage constant, K, with the length of main stream, L, and slope, S, was adopted as $K = 0.1197\frac{L}{\sqrt{S}}$ with a highly significant correlation coefficient, 0.90. The gamma function argument, N, derived from such watershed characteristics as watershed area, A, river length, L, centroid distance of the basin, Lca, and slope, S, was found to be $N = 49.2A^{1.481}L^{-2.202}L_{ca}^{-1.297}S^{-0.112}$ with high significance, having an F value of 4.83 through analysis of variance. 10. According to the linear conceptual model, the formulas established for the time distribution, peak discharge, and time to peak discharge of the Instantaneous Unit Hydrograph, when the unit effective rainfall of the unitgraph is 10 mm and the watershed area is in km², are as follows: Time distribution of IUH: $u(0,t) = \frac{2.78A}{K\Gamma(N)}e^{-t/K}\left(\frac{t}{K}\right)^{N-1}$ (㎥/sec). Peak discharge of IUH: $u(0,t)_{max} = \frac{2.78A}{K\Gamma(N)}e^{-(N-1)}(N-1)^{N-1}$ (㎥/sec). Time to peak discharge of IUH: $t_p = (N-1)K$ (hrs). 11. 
Through mathematical analysis of the recession curve of the hydrograph, it was confirmed that the empirical formula for the gamma function argument, N, is connected with the recession constant, $K_1$, peak discharge, Qp, and time to peak discharge, tp, as $\frac{K'}{t_p} = \frac{1}{N-1} - \frac{\ln(t/t_p)}{\ln(Q/Q_p)}$, where $K' = \frac{1}{\ln K_1}$. 12. Linking the two empirical formulas, for the storage constant, K, and the gamma function argument, N, into closer relation with each other, the derivation of the unit hydrograph for ungaged small watersheds can be established with the following formulas for the time distribution and peak discharge of the IUH: Time distribution of IUH: $u(0,t) = 23.2AL^{-1}S^{1/2}F(N,K,t)$ (㎥/sec), where $F(N,K,t) = \frac{e^{-t/K}(t/K)^{N-1}}{\Gamma(N)}$. Peak discharge of IUH: $u(0,t)_{max} = 23.2AL^{-1}S^{1/2}F(N)$ (㎥/sec), where $F(N) = \frac{e^{-(N-1)}(N-1)^{N-1}}{\Gamma(N)}$. 13. The base length of the Time-Area Diagram for the IUH was given by $C = 0.778\left(\frac{LL_{ca}}{\sqrt{S}}\right)^{0.423}$ with a correlation coefficient of 0.85, indicating its relation to the length of main stream, L, the centroid distance of the basin, Lca, and slope, S. 14. Relative errors in the peak discharge of the IUH by the linear conceptual model and of the IUH by routing were 2.5 and 16.9 percent, respectively, against the peak of the observed unitgraph. Therefore, it was confirmed that the accuracy of the IUH using the linear conceptual model approaches the observed unitgraph more closely than that of the flood routing in the small watersheds.
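The Nash-model IUH formulas in item 10 can be evaluated numerically as a sanity check; the parameter values below are illustrative only, not taken from the study's watersheds:

```python
import math

def iuh_ordinate(t, area_km2, K, N):
    """u(0, t) = 2.78*A / (K*Gamma(N)) * e^(-t/K) * (t/K)^(N-1), in m^3/s,
    for 10 mm effective rainfall on a watershed of area_km2 (item 10)."""
    return 2.78 * area_km2 / (K * math.gamma(N)) * math.exp(-t / K) * (t / K) ** (N - 1)

def time_to_peak(K, N):
    """t_p = (N - 1) * K, in hours."""
    return (N - 1) * K

# Illustrative parameters: A = 200 km^2, storage constant K = 5 h, N = 3
K, N, A = 5.0, 3.0, 200.0
tp = time_to_peak(K, N)            # 10 h
peak = iuh_ordinate(tp, A, K, N)   # equals the closed-form peak expression
```

Evaluating the ordinate at $t_p$ reproduces the closed-form peak $\frac{2.78A}{K\Gamma(N)}e^{-(N-1)}(N-1)^{N-1}$, which is how the two formulas in item 10 are related.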


Analysis of Variation for Parallel Test between Reagent Lots in in-vitro Laboratory of Nuclear Medicine Department (핵의학 체외검사실에서 시약 lot간 parallel test 시 변이 분석)

  • Chae, Hong Joo;Cheon, Jun Hong;Lee, Sun Ho;Yoo, So Yeon;Yoo, Seon Hee;Park, Ji Hye;Lim, Soo Yeon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.23 no.2
    • /
    • pp.51-58
    • /
    • 2019
  • Purpose: In the in-vitro laboratories of nuclear medicine departments, when the reagent lot changes, a comparability test or parallel test is performed to determine whether the results between lots are reliable. The standard most commonly used in domestic laboratories is to obtain the %difference from the difference in results between the two reagent lots; many laboratories then set the acceptance criterion to less than 20% at low concentrations and less than 10% at medium and high concentrations. If a result deviates from this standard, the test is considered failed and is repeated until the result falls within the standard range. In this study, several tests performed in nuclear medicine in-vitro laboratories were selected to analyze parallel test results and to establish customized %difference criteria for each test. Materials and Methods: From January to November 2018, the results of parallel tests for reagent lot changes were analyzed for 7 items: thyroid-stimulating hormone (TSH), free thyroxine (FT4), carcinoembryonic antigen (CEA), CA-125, prostate-specific antigen (PSA), HBs-Ab, and insulin. The RIA-MAT 280 system, which adopts the principle of IRMA, was used for TSH, FT4, CEA, CA-125 and PSA. TECAN automated dispensing equipment and the GAMMA-10 counter were used for the insulin test. For the HBs-Ab test, HAMILTON automated dispensing equipment and a Cobra gamma-ray counter were used. Separate reagents, customized calibrators, and quality control materials were used in this experiment. Results (values are %difference Max / Mean / Median / Min; all P-values by t-test > 0.05): 1. TSH: C-1 (low concentration) [14.8 / 4.4 / 3.7 / 0.0]; C-2 (middle concentration) [10.1 / 4.2 / 3.7 / 0.0]. 2. FT4: C-1 (low concentration) [10.0 / 4.2 / 3.9 / 0.0]; C-2 (high concentration) [9.6 / 3.3 / 3.1 / 0.0]. 3. 
CA-125: C-1 (middle concentration) [9.6 / 4.3 / 4.3 / 0.3]; C-2 (high concentration) [6.5 / 3.5 / 4.3 / 0.4]. 4. CEA: C-1 (low concentration) [9.8 / 4.2 / 3.0 / 0.0]; C-2 (middle concentration) [8.7 / 3.7 / 2.3 / 0.3]. 5. PSA: C-1 (low concentration) [15.4 / 7.6 / 8.2 / 0.0]; C-2 (middle concentration) [8.8 / 4.5 / 4.8 / 0.9]. 6. HBs-Ab: C-1 (middle concentration) [9.6 / 3.7 / 2.7 / 0.2]; C-2 (high concentration) [8.9 / 4.1 / 3.6 / 0.3]. 7. Insulin: C-1 (middle concentration) [8.7 / 3.1 / 2.4 / 0.9]; C-2 (high concentration) [8.3 / 3.2 / 1.5 / 0.1]. In some low-concentration measurements, the %difference was found to be above 10 and close to 15 percent when the target value was calculated at a lower concentration. In addition, when a value is measured after Standard level 6, the highest standard of the reagents in the dispensing sequence, the result may have been affected by a hook effect. Overall, there was no significant difference across lot changes of quality control material (p-value > 0.05). Conclusion: Variations between reagent lots are not large in immunoradiometric assays. This is likely due to the selection of items that have a relatively high detection rate in the immunoradiometric method, and to several remeasurements. In most test results, the difference was less than 10 percent, which was within the standard range. TSH control level 1 and PSA control level 1, which have low-concentration target values, exceeded 10 percent more than twice, but never yielded a value near 20 percent. As a result, a longer period of observation is required for more homogenized average results, and laboratory-specific acceptance criteria should be obtained for each item. 
Further studies considering various other variables are also advised.
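The acceptance rule described above (a %difference under 20% at low concentration and under 10% at medium/high concentration) reduces to a short check; function names and example values here are illustrative:

```python
def percent_difference(old_lot_result, new_lot_result):
    """%difference of the new reagent lot relative to the old lot."""
    return abs(new_lot_result - old_lot_result) / old_lot_result * 100.0

def lot_accepted(old_lot_result, new_lot_result, low_concentration=False):
    """Common domestic criterion: <20% at low, <10% at medium/high concentration."""
    limit = 20.0 if low_concentration else 10.0
    return percent_difference(old_lot_result, new_lot_result) < limit

print(lot_accepted(2.50, 2.70))        # 8.0% at middle concentration: pass
print(lot_accepted(0.50, 0.58, True))  # 16.0% at low concentration: pass
```

Under the study's suggestion, the fixed 10%/20% limits would be replaced by laboratory-specific, per-item criteria derived from long-term observation.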

A Clinical Evaluation of Splanchnic Nerve Block (내장신경차단에 관한 임상적 연구)

  • Kim, Soo-Yeoun;Oh, Hung-Kun;Yoon, Duek-Mi;Shin, Yang-Sik;Lee, Youn-Woo;Kim, Jong-Rae
    • The Korean Journal of Pain
    • /
    • v.1 no.1
    • /
    • pp.34-46
    • /
    • 1988
  • Intractable pain from advanced carcinoma of the upper abdomen is difficult to manage. One method used to control the pain associated with these malignancies is to block the splanchnic nerve. In 1919, Kappis described a technique by which the splanchnic nerve of the upper abdomen could be anesthetized using a percutaneous injection. This method has been used for the relief of upper abdominal pain due to hematoma and cancer of the pancreas, stomach, gall bladder, bile duct, and colon. During the period from November 1968 to January 1986, this method was used in 208 cases of malignancy at Severance Hospital and clinically evaluated. Patients were retroactively grouped according to the stage of development of the technique used. Twelve patients who received the treatment in the period from November 1968 to March 1977 were designated as group 1, 26 patients from April 1977 to April 1979 as group 2, and 170 from May 1979 to January 1986 as group 3. The results are as follows: 1) The number of patients receiving splanchnic nerve block has been increasing since 1977. 2) The total of 208 patients comprised 133 males and 75 females, ranging in age from 18 to 84 and averaging 51. 3) The causes of pain were stomach cancer in 90, pancreatic cancer in 69, and miscellaneous cancers in 49 cases. 4) 57.7% had had surgery, and 3.7% had had chemotherapy, before the splanchnic nerve block was done. 5) The blocks were carried out with the patient in the prone position as described by Dr. Moore. For groups 2 and 3, a C-arm image intensifier was used. In group 1, a long 22-gauge needle was inserted at the lower border of the 12th rib on each side, about 7 cm from the midline. The average distance from the midline was 6.60±0.61 cm on the left side and 6.60±0.83 cm on the right side in group 2, and 5.46±0.76 cm on the left side and 5.49±0.69 cm on the right side in group 3. The average depth to which the needle was inserted was 8.60±0.52 cm on the left side and 8.74±0.60 cm on the right side in group 2, and 8.96±0.63 cm on the left side and 9.18±0.57 cm on the right side in group 3. 6) The points of the inserted needles were positioned in the upper quarter anteriorly of the L1 vertebra on the lateral roentgenogram in group 3, 51.8% on the left side and 54.4% on the right side. On the anteroposterior roentgenogram in group 3, the inserted needle points were located in the upper and anterolateral part of the L1 vertebra, 68.5% on the left side and 60.6% on the right side. The needle tip was not advanced beyond the anterior margin of the vertebral body. 7) In some cases of group 3, contrast medium was injected before the block was done; it showed spread upward along the anterior margin of the vertebral body. 8) The concentration and the average amount of drug used in each group were as follows: In group 1, 39.17±6.69 ml of 0.5-1% lidocaine or 0.25% bupivacaine was injected for the test block, and one to three days after the test block, 40.00±4.26 ml of 50% alcohol was injected for the semipermanent block. In group 2, 13.75±4.88 ml of 1% lidocaine was used for the test block, followed by 46.17±4.37 ml of 50% alcohol for the semipermanent block. In group 3, 15.63±1.19 ml of 1% lidocaine for the test block was followed by 15.62±1.20 ml of pure alcohol and 16.05±2.58 ml of 50% alcohol for the semipermanent block. 9) The result of the test block was satisfactory in all cases. With the semipermanent block, however, 83.3 percent of the patients in group 1 received relief from pain for at least 2 weeks after the block, 73.1% in group 2, and 91.8% in group 3. Among the unsuccessful cases, 2 cases in group 1 were controlled by narcotics, but 7 cases in group 2 and 14 cases in group 3 received the same splanchnic nerve block once or twice more within 2 weeks. In some cases, however, it was several months before the second block, and in one case even 7 years. 10) The most common complications of splanchnic nerve block were hypotension (25.5%), occasional flushing of the face, nausea, vomiting, and chest discomfort. 11) For the patients in group 3, the supplemental block most commonly used was a continuous epidural block; it was used as a diagnostic block and to afford relief from pain before the splanchnic nerve block was done. 12) The interval between receiving the alcohol block and discharge was from 5 to 8 days in 61 cases (31.1%) and from 1 to 2 days in 48 cases (24.5%). From the above results, it can be concluded that a splanchnic nerve block done in the prone position with pure and 50% alcohol, immediately after an effective test block with 1% lidocaine under C-arm fluoroscopic control, is satisfactory and reliable. How to minimize repeat blocks is still a problem to be solved.


Fast Join Mechanism that considers the switching of the tree in Overlay Multicast (오버레이 멀티캐스팅에서 트리의 스위칭을 고려한 빠른 멤버 가입 방안에 관한 연구)

  • Cho, Sung-Yean;Rho, Kyung-Taeg;Park, Myong-Soon
    • The KIPS Transactions:PartC
    • /
    • v.10C no.5
    • /
    • pp.625-634
    • /
    • 2003
  • More than a decade after its initial proposal, deployment of IP Multicast has been limited due to the problems of traffic control in multicast routing, multicast address allocation in the global internet, reliable multicast transport techniques, etc. Lately, with the increase of multicast application services such as internet broadcasting and real-time security information services, overlay multicast has been developed as a new internet multicast technology. In this paper, we describe an overlay multicast protocol and propose a fast join mechanism that considers switching of the tree. To find a potential parent, the existing search algorithm descends the tree from the root one level at a time, which causes long joining latency. It also tries to select the nearest node as a potential parent, but it cannot always do so because of the degree limit of the nodes. As a result, the generated tree has low efficiency. To reduce the long joining latency and improve the efficiency of the tree, we propose searching two levels of the tree at a time. This method forwards the joining request message to a node's own children, so in the steady state there is no overhead to maintain the tree; but when a joining request arrives, the increased number of search messages reduces the long joining latency. Searching more nodes also helps to construct more efficient trees. In order to evaluate the performance of our fast join mechanism, we measure metrics such as the search latency, the number of searched nodes, and the number of switchings, as functions of the number of members and the degree limit. The simulation results show that the performance of our mechanism is superior to that of the existing mechanism.
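A rough sketch of the two-level descent described above; the `Node` structure, distance function, and tie-breaking are assumptions for illustration, not the paper's actual protocol messages:

```python
class Node:
    """Overlay tree node with a fan-out (degree) limit."""
    def __init__(self, name, degree_limit=2, children=None):
        self.name = name
        self.degree_limit = degree_limit
        self.children = children or []

    def has_room(self):
        return len(self.children) < self.degree_limit

def find_potential_parent(root, dist):
    """Greedy descent: each query to a node also returns its children,
    i.e. two levels are examined per round trip instead of one.
    `dist(node)` is the joining member's distance to `node`."""
    current = root
    while True:
        group = [current] + current.children
        best = min(group, key=dist)
        if best is current:
            break
        current = best  # descend and search the next two levels
    # Among the final neighbourhood, prefer the nearest node with spare degree.
    candidates = [n for n in [current] + current.children if n.has_room()]
    return min(candidates, key=dist) if candidates else None

# Tiny example tree: root A with children B, C; B has children D, E (B is full).
d, e = Node("D"), Node("E")
b, c = Node("B", children=[d, e]), Node("C")
a = Node("A", children=[b, c])
distances = {"A": 5, "B": 3, "C": 4, "D": 1, "E": 6}
parent = find_potential_parent(a, lambda n: distances[n.name])
print(parent.name)  # "D": nearest reachable node with spare degree
```

The degree check at the end reflects the paper's observation that the nearest node cannot always be chosen: a full node is skipped in favor of the nearest node that still has room.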

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing ontology development methodologies had to be chosen. The most important considerations in selecting the ontology development methodology for GSO included whether it can be applied to a new domain; whether it covers a broader set of development tasks; and whether it gives a sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. We concluded that METHONTOLOGY was the most applicable to the building of GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology. 
METHONTOLOGY describes a very detailed approach for building an ontology under a centralized development environment at the conceptual level. This methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language. OWL was selected for its computational quality in consistency checking and classification, which is crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used owing to its platform-independent characteristics. Based on the researchers' experience of developing GSO, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts who do not have ontology construction experience can easily build ontologies. However, it is still difficult for these domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology. Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin an ontology building project, but also to determine whether the project will be successful. 
Third, METHONTOLOGY excludes an explanation on the use and integration of existing ontologies. If an additional stage for considering reuse is introduced, developers might share benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. This methodology needs to explain the allocation of specific tasks to different developer groups, and how to combine these tasks once specific given jobs are completed. Fifth, METHONTOLOGY fails to suggest the methods and techniques applied in the conceptualization stage sufficiently. Introducing methods of concept extraction from multiple informal sources or methods of identifying relations may enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE perfectly transforms a conceptual ontology into a formal ontology. It also does not guarantee whether the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs to add criteria for user evaluation of the actual use of the constructed ontology under user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition while working on the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; thus, it can be considered a heavy methodology. Adopting an agile methodology will result in reinforcing active communication among developers and reducing the burden of documentation completion. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experiences; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed. 
This study also affords some insights for ontology methodology researchers who want to design a more advanced ontology development methodology.

An Improved Method to Determine Corn (Zea mays L.) Plant Response to Glyphosate (Glyphosate에 대한 옥수수 반응의 개선된 검정방법)

  • Kim, Jin-Seog;Lee, Byung-Hoi;Kim, So-Hee;Min, Suk-Ki;Choi, Jung-Sup
    • Journal of Plant Biotechnology
    • /
    • v.33 no.1
    • /
    • pp.57-62
    • /
    • 2006
  • Several methods for determining the response of corn to glyphosate were investigated to provide a fast and reliable method for identifying glyphosate-resistant corn in vivo. Two bioassays were developed. One assay, named the 'whole plant / leaf growth assay', applies the herbicide glyphosate to the upper part of the 3rd leaf and measures the growth of the herbicide-untreated 4th leaf at 3 days after treatment. In this assay, the leaf growth of conventional corn was inhibited in a dose-dependent manner from 50 to 1,600 μg/mL of glyphosate, and growth inhibition at 1,600 μg/mL was 55% of the untreated control. This assay has the potential to be used especially when the primary cause of glyphosate resistance is related to a reduction in herbicide translocation. The other assay, named the 'leaf segment / shikimate accumulation assay', places four excised leaf segments (4×4 mm) in each well of a 48-well microtiter plate containing 200 μL of test solution, and the amount of shikimate is determined after incubation for 24 h in continuous light at 25°C. In this assay, 0.33% sucrose added to the basic test solution enhanced shikimate accumulation by 3 to 4 times, and shikimate accumulation was linear from 2 to 8 μg/mL of glyphosate, showing an improved response over the method described by Shaner et al. (2005). The leaf segment / shikimate accumulation assay is simple and robust, and has the potential to be used as a high-throughput assay when the primary cause of glyphosate resistance is related to EPSPS, the target site of the herbicide. Taken together, these two assays should be highly useful for initially selecting lines obtained after transformation, investigating the migration of the glyphosate-resistance gene into other weeds, and detecting weedy glyphosate-resistant corn in the field.

Development of an Automatic Seed Marker Registration Algorithm Using CT and kV X-ray Images (CT 영상 및 kV X선 영상을 이용한 자동 표지 맞춤 알고리듬 개발)

  • Cheong, Kwang-Ho;Cho, Byung-Chul;Kang, Sei-Kwon;Kim, Kyoung-Joo;Bae, Hoon-Sik;Suh, Tae-Suk
    • Radiation Oncology Journal
    • /
    • v.25 no.1
    • /
    • pp.54-61
    • /
    • 2007
  • Purpose: The purpose of this study is to develop a practical method for determining accurate marker positions for prostate cancer radiotherapy using CT images and kV x-ray images obtained from the on-board imager (OBI). Materials and Methods: Three gold seed markers were implanted into the reference position inside the prostate gland by a urologist. Multiple digital image processing techniques were used to determine each seed marker position, and the center-of-mass (COM) technique was employed to determine a representative reference seed marker position. A setup discrepancy can be estimated by comparing a computed COM_OBI with the reference COM_CT. The proposed algorithm was applied to a seed phantom and to four prostate cancer patients with seed implants treated in our clinic. Results: In the phantom study, the calculated COM_CT and COM_OBI agreed with COM_actual within a millimeter. The algorithm could also localize each seed marker correctly and calculated COM_CT and COM_OBI for all CT and kV x-ray image sets, respectively. Discrepancies between the setup errors from 2D-2D matching using the OBI application and those from the proposed algorithm were less than one millimeter on each axis. The setup error was in the range of 0.1±2.7 to 1.8±6.6 mm in the AP direction, 0.8±1.6 to 2.0±2.7 mm in the SI direction, and -0.9±1.5 to 2.8±3.0 mm in the lateral direction, although it was quite patient-dependent. Conclusion: As it took less than 10 seconds to evaluate a setup discrepancy, the method can help reduce the setup correction time while minimizing subjective, user-dependent factors. However, the on-line correction process should be integrated into the treatment machine control system for a more reliable procedure.
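The COM comparison above reduces to averaging the marker coordinates in each image set and subtracting per axis; the coordinates below are made up for illustration:

```python
def center_of_mass(markers):
    """Unweighted center of mass of seed-marker coordinates (x, y, z) in mm."""
    n = len(markers)
    return tuple(sum(p[axis] for p in markers) / n for axis in range(3))

def setup_discrepancy(com_ct, com_obi, ndigits=3):
    """Per-axis shift of the daily OBI COM from the reference CT COM."""
    return tuple(round(o - c, ndigits) for c, o in zip(com_ct, com_obi))

# Three markers localized on planning CT and on the daily kV images (mm)
com_ct = center_of_mass([(10.0, 20.0, 30.0), (12.0, 22.0, 31.0), (11.0, 18.0, 29.0)])
com_obi = center_of_mass([(11.0, 20.5, 30.2), (13.0, 22.5, 31.2), (12.0, 18.5, 29.2)])
print(setup_discrepancy(com_ct, com_obi))  # (1.0, 0.5, 0.2) couch correction in mm
```

Using the COM rather than individual markers makes the estimate tolerant of small per-marker localization errors, which is one motivation for the technique described above.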

Extra Dose Measurement of Differential Slice Thickness of MVCT Image with Helical Tomotherapy (토모테라피 치료 시 MVCT Image의 Slice Thickness 차이에 따른 선량 비교)

  • Lee, Byungkoo;Kang, Suman
    • Journal of the Korean Society of Radiology
    • /
    • v.7 no.2
    • /
    • pp.145-149
    • /
    • 2013
  • Helical tomotherapy is an innovative means of delivering intensity-modulated radiation therapy (IMRT) using a device that merges features of a linear accelerator and a helical computed tomography (CT) scanner. During the helical tomotherapy process, megavoltage computed tomography (MVCT) images are routinely used to guide precise patient set-up before and after treatment delivery; however, this imaging adds to the patient's total dose. This study therefore investigated the MVCT imaging dose using a cylindrical "Cheese" phantom on a tomotherapy machine. The phantom was scanned at pitch values of 1, 2, and 3 mm, once with the same number of slices (10) and once with the same scan length (approximately 9 cm), using identical phantom set-ups on the tomotherapy couch. The average MVCT imaging dose was measured with an A1SL ion chamber inserted in the phantom at a preset geometry. For the same number of slices (10), the average dose was 2.24, 1.02, and 0.81 cGy at pitch 1, 2, and 3 mm, respectively; for the same scan length, it was 2.47, 1.28, and 0.88 cGy. The two major parameters, the assigned pitch and the scanning length, were the most important contributors to dose variation, and the MVCT dose was inversely proportional to the pitch value. These results may provide reliable guidance for planning the scanning region, which can help minimize the extra dose to the patient.

Questionnaires concerning awareness and performance levels of infection control were distributed to radiology departments at hospitals with 300 sickbeds throughout the Pohang region of North Gyeongsang Province. The investigation also measured contamination levels of imaging equipment and assistive apparatuses in order to prepare a plan to promote the prevention and management of hospital infections.
The survey was designed to question respondents regarding personal data, infection management prevention education, and infection management guidelines.
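The inverse relation between MVCT imaging dose and pitch reported in the tomotherapy abstract above can be sanity-checked against its measured fixed-slice values: if dose is inversely proportional to pitch, the product dose × pitch should be roughly constant. This is only a consistency check on the published numbers, not part of the study's method.

```python
# Measured average doses (cGy) for the fixed 10-slice scans, keyed by pitch (mm).
doses = {1: 2.24, 2: 1.02, 3: 0.81}

# Under dose ∝ 1/pitch, dose * pitch should be approximately constant.
products = {pitch: round(dose * pitch, 2) for pitch, dose in doses.items()}
print(products)  # {1: 2.24, 2: 2.04, 3: 2.43}
```

The products (2.24, 2.04, 2.43 cGy·mm) agree to within about 10%, consistent with the abstract's claim of inverse proportionality given measurement uncertainty.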