• Title/Summary/Keyword: 수준 (level)


A Study on the Meaning and Future of the Moon Treaty (달조약의 의미와 전망에 관한 연구)

  • Kim, Han-Taek
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.21 no.1
    • /
    • pp.215-236
    • /
    • 2006
  • This article focuses on the meaning of the 1979 Moon Treaty and its future. Although the Moon Treaty is one of the five major space-related treaties, it has been accepted by only 11 member states, all non-space powers, and thus has the least influence in the field of space law. The article analyses the relationship between the 1979 Moon Treaty and the 1967 Space Treaty, the first treaty of principles, and examines the meaning of the "Common Heritage of Mankind" (hereinafter CHM) stipulated in the Moon Treaty in terms of international law. It also deals with present and future problems arising from the Moon Treaty. As far as the 1967 Space Treaty is concerned, the main standpoint is that outer space, including the moon and the other celestial bodies, is res extra commercium: an area not subject to national appropriation, like the high seas. It proclaims the principle of non-appropriation concerning celestial bodies in outer space. The concept of CHM stipulated in the Moon Treaty, however, created an entirely new category of territory in international law. This concept basically conveys the idea that the management, exploitation and distribution of the natural resources of the area in question are matters to be decided by the international community and are not to be left to the initiative and discretion of individual states or their nationals. A similar provision is found in the 1982 Law of the Sea Convention, which operates the International Sea-bed Authority created under the concept of CHM. According to the Moon Treaty, an international regime is to be established as the exploitation of the natural resources of celestial bodies other than the Earth is about to become feasible. Before the establishment of such a regime, one could imagine a moratorium upon the exploitation of the natural resources of the celestial bodies.
But the drafting history of the Moon Treaty indicates that no moratorium on the exploitation of natural resources was intended prior to the setting up of the international regime. Each State Party could therefore exploit the natural resources, bearing in mind that those resources are the CHM. In this respect it would be better for Korea, currently not a party to the Moon Treaty, to become a member state in the near future. According to the Moon Treaty, the efforts of those countries which have contributed, either directly or indirectly, to the exploration of the moon shall be given special consideration. The Moon Treaty, although criticised by some space law experts, represents a solid basis upon which further space exploration can continue; it expresses the common collective wisdom of all member States of the United Nations and responds to the needs and possibilities of those that have already sent their technologies into outer space.


Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, which transcends geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating such knowledge. From this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a larger contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this kind of knowledge-sharing distribution on the efficiency of knowledge collaboration, extended to reflect work characteristics.
All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions in an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to capture the inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, the number of sections a featured article has, etc. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article level, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those citing at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal.
We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, the curvilinear effect of the Pareto ratio and of the inequality of knowledge sharing on collaboration efficiency is more pronounced for more academic tasks.
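The two focal variables of the study can be computed directly from per-editor contribution counts. The sketch below is illustrative, not the authors' code, and the edit counts are hypothetical; it shows the Pareto ratio (share of contributions by the top 20% of editors) and the standard Gini coefficient formula.

```python
# Illustrative computation of the study's two focal variables from a
# hypothetical list of per-editor contribution counts for one article.

def pareto_ratio(contributions):
    """Share of total contributions made by the top 20% of editors."""
    counts = sorted(contributions, reverse=True)
    top_n = max(1, round(len(counts) * 0.2))
    return sum(counts[:top_n]) / sum(counts)

def gini(contributions):
    """Gini coefficient of the contribution distribution (0 = perfect equality)."""
    counts = sorted(contributions)
    n, total = len(counts), sum(counts)
    # Standard form: G = (2 * sum(i * x_i)) / (n * total) - (n + 1) / n, i = 1..n
    weighted = sum((i + 1) * x for i, x in enumerate(counts))
    return 2 * weighted / (n * total) - (n + 1) / n

edits = [120, 45, 30, 8, 5, 3, 2, 2, 1, 1]  # hypothetical edit counts per editor
print(pareto_ratio(edits))  # share contributed by the top 2 of 10 editors
print(gini(edits))
```

A Pareto ratio near 0.8 with a moderate Gini would correspond to the classic 80-20 pattern; the study's point is that collaboration efficiency peaks at an intermediate value of each.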

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.109-125
    • /
    • 2020
  • The Ministry of National Defense is pushing ahead with the Defense Acquisition Program to build strong defense capabilities, and it spends more than 10 trillion won annually on defense improvement. As the Defense Acquisition Program is directly related to the security of the nation as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the Defense Acquisition Program has made it challenging for many working-level officials to carry out the program smoothly; many reportedly discover related regulations they were unaware of only after pushing ahead with their work. In addition, statutory statements related to the Defense Acquisition Program tend to cause serious issues even if only a single expression within a sentence is wrong. Despite this, efforts to establish a sentence comparison system that corrects such issues in real time have been minimal. Therefore, this paper proposes an implementation plan for a "Comparison System between the Statement of Military Reports and Related Laws" that uses a Siamese Network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program-related documents and those from related statutory provisions, to determine and classify the risk of illegality, and to make users aware of the consequences. Various artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentence" (described in actual statutes) and "Edited Sentence" (edited sentences derived from the "Original Sentence").
Among the many Defense Acquisition Program-related statutes, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The "Original Sentence" set consists of the 83 provisions that actually appear in these Acts and are most accessible to working-level officials in their work. The "Edited Sentence" set comprises 30 to 50 similar sentences per clause ("Original Sentence") that are likely to appear, in modified form, in military reports. During the creation of the edited sentences, the original sentences were modified using 12 predefined rules, and the edited sentences were produced in proportion to the number of such rules for each original sentence. After conducting 1:1 sentence-similarity performance evaluation experiments, it was possible to classify each "Edited Sentence" as legal or illegal with considerable accuracy. The "Edited Sentence" dataset used to train the neural network models covers a variety of actual statutory statements ("Original Sentence") as characterized by the 12 rules. On the other hand, when fed only the "Original Sentence" and "Edited Sentence" dataset, the models are not able to effectively classify other sentences that appear in actual military reports; the dataset is not ample enough for the models to recognize new incoming sentences. Hence, the performance of the models was reassessed on an additional 120 newly written sentences that better resemble those in actual military reports while still being associated with the original sentences. We were thereby able to confirm that the models' performance surpassed a certain level even when they had been trained merely with "Original Sentence" and "Edited Sentence" data.
If sufficient model learning is achieved by improving and expanding the full training set with sentences that actually appear in reports, the models will be able to better classify sentences from military reports as legal or illegal. Based on the experimental results, this study confirms the possibility and value of building a "Real-Time Automated Comparison System between Military Documents and Related Laws". The approach developed in this experiment can identify which specific clause, of the several that appear in the related laws, is most similar to a sentence appearing in Defense Acquisition Program-related military reports, which helps determine whether the contents of the report sentences are at risk of illegality when compared with the law clauses.
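The paper's models are Siamese Bi-LSTM/Self-Attention networks; as a minimal runnable stand-in, the sketch below keeps the same pipeline shape (encode both sentences with a shared encoder, score their similarity, flag a report sentence that is close to but not identical with a statutory clause) while replacing the learned encoder with a character-bigram bag. The sentences, threshold, and function names are hypothetical.

```python
# Simplified stand-in for the Siamese-network pipeline: a shared toy
# encoder, a cosine similarity score, and a risk flag for report
# sentences that nearly match a statutory clause.
from collections import Counter
from math import sqrt

def encode(sentence):
    """Toy shared encoder: bag of character bigrams (stands in for a Siamese branch)."""
    s = sentence.lower()
    return Counter(s[i:i + 2] for i in range(len(s) - 1))

def cosine(a, b):
    """Cosine similarity between two bag-of-bigram vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def flag_risky(report_sentence, statute_sentences, threshold=0.8):
    """Return (risky, clause, score): the most similar statute clause,
    flagged when similarity is high but the wording differs."""
    scored = [(cosine(encode(report_sentence), encode(s)), s)
              for s in statute_sentences]
    score, clause = max(scored)
    risky = score >= threshold and report_sentence != clause
    return risky, clause, score
```

In the proposed system the encoder would be the trained Siamese branch and the clause set would be the 83 statutory provisions; this sketch only illustrates the comparison-and-flag step.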

A Methodology to Develop a Curriculum based on National Competency Standards - Focused on Methodology for Gap Analysis - (국가직무능력표준(NCS)에 근거한 조경분야 교육과정 개발 방법론 - 갭분석을 중심으로 -)

  • Byeon, Jae-Sang;Ahn, Seong-Ro;Shin, Sang-Hyun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.43 no.1
    • /
    • pp.40-53
    • /
    • 2015
  • To train manpower that meets the requirements of the industrial field, the introduction of the National Qualification Framework (hereinafter referred to as NQF), based on National Competency Standards (hereinafter referred to as NCS), was decided in 2001 under the lead of the Office for Government Policy Coordination. For landscape architecture within the construction field, the pilot "NCS - Landscape Architecture" was developed in 2008 and test-operated for three years starting in 2009. In particular, as the 'realization of a competence-based society, not one based on educational background' was adopted as one of the major projects of the Park Geun-Hye government (inaugurated in 2013), the NCS system was constructed on a nationwide scale as a concrete means of practicing this. However, because the NCS developed by the nation specifies ideal job-performing abilities, it has weaknesses: it cannot reflect actual operational differences in student levels between universities, problems in securing equipment and professors, and constraints in the number of current curricula. For a soft landing into a practical curriculum, the gap between the current curriculum and the NCS must first be clearly analyzed. Gap analysis is the initial-stage methodology for reorganizing an existing curriculum into an NCS-based curriculum: based on the ability-unit elements and performance standards of each NCS ability unit, the degree of agreement with the existing curriculum within the department is rated on a Likert scale of 1 to 5 and analyzed. Thus, universities wishing to operate NCS in the future can, by measuring the gap and the level of agreement between the current university curriculum and the NCS, secure a basic tool to verify the applicability of NCS and the effectiveness of further development and operation.
The advantages of reorganizing the curriculum through gap analysis are, first, that a quantitative index of the NCS adoption rate can be provided for each department in connection with government financial support projects, and, second, that an objective standard is provided for judging sufficiency or insufficiency when reorganizing into an NCS-based curriculum. In other words, when introducing the relevant NCS subdivisions, the insufficient ability units and ability-unit elements can be extracted, and at the same time the supplementary matters for each ability-unit element per existing subject can be extracted, providing direction for detailed class programs and the opening of basic subjects. The Ministry of Education and the Ministry of Employment and Labor must gather people from industry to actively develop and supply the NCS standard at a practical level, so as to systematically reflect the requirements of the industrial field in education, training, and qualification, and universities wishing to apply NCS must reorganize their curricula to connect work and qualification based on NCS. To enable this, universities must consider the relevant industrial prospects and the relation between the faculty resources within the university and local industry in order to clearly select the NCS subdivision to be applied. Afterwards, gap analysis must be used for the NCS-based curriculum reorganization to establish the direction of the reorganization more objectively and rationally, in order to participate efficiently in the process-evaluation-type qualification system.
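As a hypothetical worked example of the gap-analysis step (the element names and ratings below are invented for illustration), each NCS ability-unit element is rated on the 1-to-5 Likert scale for how well the current curriculum covers it, and elements below a cutoff are extracted as supplementation candidates:

```python
# Toy gap extraction: Likert coverage ratings per NCS ability-unit
# element; elements rated below the cutoff are flagged as gaps.

coverage = {                        # element -> Likert rating (5 = full match)
    "landscape design drawing": 5,
    "planting plan development": 4,
    "construction supervision": 2,
    "maintenance cost estimation": 1,
}

CUTOFF = 3  # ratings below this indicate a gap to supplement

gaps = sorted((rating, element) for element, rating in coverage.items()
              if rating < CUTOFF)
for rating, element in gaps:
    print(f"gap: {element} (coverage {rating}/5)")
```

The sorted gap list gives the kind of objective priority ordering for curriculum supplementation that the abstract describes.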

THE EFFECT OF INTERMITTENT COMPOSITE CURING ON MARGINAL ADAPTATION (복합레진의 간헐적 광중합 방법이 변연적합도에 미치는 영향)

  • Yun, Yong-Hwan;Park, Sung-Ho
    • Restorative Dentistry and Endodontics
    • /
    • v.32 no.3
    • /
    • pp.248-259
    • /
    • 2007
  • The aim of this research was to study the effect of intermittent polymerization on marginal adaptation by comparing the marginal adaptation of intermittently polymerized composite to that of continuously polymerized composite. The materials used for this study were Pyramid (Bisco Inc., Schaumburg, U.S.A.) and Heliomolar (Ivoclar Vivadent, Liechtenstein). The experiment was carried out in class II MOD cavities prepared in 48 extracted human maxillary premolars. The samples were divided into 4 groups by light-curing method: group 1 - continuous curing (60 s light on with no light off); group 2 - intermittent curing (cycles of 3 s, with 2 s light on and 1 s light off, for 90 s); group 3 - intermittent curing (cycles of 2 s, with 1 s light on and 1 s light off, for 120 s); group 4 - intermittent curing (cycles of 3 s, with 1 s light on and 2 s light off, for 180 s). Consequently, the total amount of light energy radiated was the same in all groups. Each specimen went through thermo-mechanical loading (TML), which consisted of mechanical loading (720,000 cycles, 5.0 kg) at a speed of 120 rpm for 100 hours and thermocycling (6,000 thermocycles of alternating water at $50^{\circ}C$ and $55^{\circ}C$). The continuous margin (CM) (%) of the total margin and of the regional margins - occlusal enamel (OE), vertical enamel (VE), and cervical enamel (CE) - was measured before and after TML under a $\times200$ digital light microscope. Three-way ANOVA and Duncan's Multiple Range Test were performed at the 95% level of confidence to test the effect of three variables on the CM (%) of the total margin: light-curing condition, composite material, and TML. In each group, one-way ANOVA and Duncan's Multiple Range Test were additionally performed to compare the CM (%) of the regions (OE, VE, CE). The results indicated that all three variables were statistically significant (p < 0.05). Before TML, in the groups using Pyramid, groups 3 and 4 showed higher CM (%) than groups 1 and 2, and in the groups using Heliomolar, groups 3 and 4 showed higher CM (%) than group 1 (p < 0.05). After TML, in both the Pyramid and Heliomolar groups, group 3 showed higher CM (%) than group 1 (p < 0.05). The CM (%) of the regions was significantly different in each group (p < 0.05). Before TML, no statistical difference was found between groups within the VE and CE regions. In the OE region, group 4 of Pyramid showed higher CM (%) than group 2, and groups 2 and 4 of Heliomolar showed higher CM (%) than group 1 (p < 0.05). After TML, no statistical difference was found among groups within the VE and CE regions. In the OE region, group 3 of Pyramid showed higher CM (%) than groups 1 and 2, and groups 2, 3 and 4 of Heliomolar showed higher CM (%) than group 1 (p < 0.05). It was concluded that intermittent polymerization may be effective in reducing marginal gap formation.

Effect of Hydrogen Peroxide Enema on Recovery of Carbon Monoxide Poisoning (과산화수소 관장이 급성 일산화탄소중독의 회복에 미치는 영향)

  • Park, Won-Kyun;Chae, E-Up
    • The Korean Journal of Physiology
    • /
    • v.20 no.1
    • /
    • pp.53-63
    • /
    • 1986
  • Carbon monoxide (CO) poisoning has been one of the major environmental problems because of tissue hypoxia, especially brain tissue hypoxia, due to the great affinity of CO for hemoglobin. Inhalation of pure oxygen $(O_2)$ under high atmospheric pressure has been considered the best treatment of CO poisoning, supplying $O_2$ to hypoxic tissues in dissolved form in plasma and rapidly eliminating CO from carboxyhemoglobin (HbCO). Hydrogen peroxide $(H_2O_2)$ is rapidly decomposed to water and $O_2$ in the presence of catalase in the blood, but intravenous administration of $H_2O_2$ is hazardous because of the formation of methemoglobin and air embolism. However, it has been reported that an enema of $H_2O_2$ solution below 0.75% can continuously supply $O_2$ to hypoxic tissues without the hazards mentioned above. This study was performed to evaluate the effect of an $H_2O_2$ enema on the elimination of CO from HbCO during recovery from acute CO poisoning. Rabbits weighing about 2.0 kg were exposed to a CO gas mixture with room air for 30 minutes. After acute CO poisoning, 30 rabbits were divided into three groups according to the recovery condition. The first group was exposed to room air, and the second group inhaled 100% $O_2$ at 1 atmosphere. The third group was administered 10 ml of 0.5% $H_2O_2$ solution per kg body weight by enema immediately after CO poisoning and was exposed to room air during the recovery period. Arterial blood was sampled before and after CO poisoning and at 15, 30, 60 and 90 minutes of the recovery period. The blood pH, $Pco_2$ and $Po_2$ were measured anaerobically with a blood gas analyzer, and the saturation percentage of HbCO was measured by the spectrophotometric method. The effect of the $H_2O_2$ enema on recovery from acute CO poisoning was observed and compared with the room air group and the 100% $O_2$ inhalation group.
The results obtained from the experiment are as follows. The pH of arterial blood decreased significantly after CO poisoning and until the first 15 minutes of the recovery period in all groups. Thereafter, it slowly returned toward the pre-poisoning level, but the recovery of pH in the $H_2O_2$ enema group was more delayed than in the other groups during the recovery period. $Paco_2$ decreased significantly after CO poisoning in all groups. During the recovery period, the $Paco_2$ of the room air group recovered completely to the pre-poisoning level, but that of the 100% $O_2$ inhalation group and the $H_2O_2$ enema group had not recovered by 90 minutes of the recovery period. $Pao_2$ decreased slightly after CO poisoning. During the recovery period, it increased markedly in the first 15 minutes and remained above the pre-poisoning level in all groups. Furthermore, the $Pao_2$ of the $H_2O_2$ enema group was 102 to 107 mmHg, about 10 mmHg higher than that of the room air group during the recovery period. The saturation percentage of HbCO increased to the range of 54 to 72 percent after CO poisoning and generally diminished during the recovery period. However, the diminution of the HbCO saturation percentage in the $H_2O_2$ enema group was generally faster than in the 100% $O_2$ inhalation group and the room air group, and the diminution in the 100% $O_2$ inhalation group was also slightly faster than in the room air group in the later part of the recovery period. In conclusion, an enema of 0.5% $H_2O_2$ solution seems to facilitate the elimination of CO from HbCO in the blood and simultaneously increase $Pao_2$ during recovery from acute CO poisoning.


Light and Electron Microscopy of Gill and Kidney on Adaptation of Tilapia(Oreochromis niloticus) in the Various Salinities (틸라피아의 해수순치시(海水馴致時) 아가미와 신장(腎臟)의 광학(光學) 및 전자현미경적(電子顯微鏡的) 관찰(觀察))

  • Yoon, Jong-Man;Cho, Kang-Yong;Park, Hong-Yang
    • Applied Microscopy
    • /
    • v.23 no.2
    • /
    • pp.27-40
    • /
    • 1993
  • This study was undertaken to examine the light microscopic and ultrastructural changes of the gill and kidney of female tilapia (Oreochromis niloticus) adapted to 0‰, 10‰, 20‰, and 30‰ salt concentrations, respectively, by light, scanning, and transmission electron microscopy. The results obtained in these experiments were summarized as follows. Gill chloride cell hyperplasia, gill lamellar epithelial separation, kidney glomerular shrinkage, blood congestion in the kidneys, and deposition of hyaline droplets in the kidney glomeruli and tubules were the histological alterations observed in Oreochromis niloticus. The incidence and severity of gill chloride cell hyperplasia increased rapidly with increasing salinity, and the number of chloride cells in the gill lamellae rapidly increased in response to high external NaCl concentrations. Scanning electron microscopy (SEM) indicated that the gill secondary lamellae of tilapia (Oreochromis niloticus) exposed to seawater were characterized by rough convoluted surfaces during adaptation. Transmission electron microscopy (TEM) indicated that the mitochondria in chloride cells exposed to seawater were both large and elongate and contained well-developed cristae; TEM also showed an increase in chloride cells on seawater exposure. The presence of two mitochondria-rich cell types is discussed with regard to their possible role in the hypoosmoregulatory changes which occur during seawater adaptation. Most Oreochromis niloticus adapted to seawater had occasional glomeruli completely filling Bowman's capsule in the kidney; glomerular shrinkage occurred more frequently in the kidney tissues of individuals living in 10‰, 20‰, and 30‰ seawater than in those living in 0‰ freshwater, and blood congestion was more severe in the kidney tissues of individuals living in 20‰ and 30‰ seawater than in those living in 10‰ seawater.
There were decreases in the glomerular area and in the nuclear area of the main segments of the nephron, and the nuclear areas of the nephron cells in seawater-adapted tilapia were smaller than those of freshwater-adapted fish. Our findings demonstrated that Oreochromis niloticus tolerated a moderately saline environment, and that the body weight gain of fish living in 30‰ was relatively higher than that of fish living in 10‰ in spite of the histopathological changes.


Effects of Recipient Oocytes and Electric Stimulation Condition on In Vitro Development of Cloned Embryos after Interspecies Nuclear Transfer with Caprine Somatic Cell (수핵난자와 전기적 융합조건이 산양의 이종간 복제수정란의 체외발달에 미치는 영향)

  • 이명열;박희성
    • Reproductive and Developmental Biology
    • /
    • v.28 no.1
    • /
    • pp.21-27
    • /
    • 2004
  • This study was conducted to investigate the developmental ability of caprine embryos after interspecies somatic cell nuclear transfer. Recipient bovine and porcine oocytes were obtained from a slaughterhouse and matured in vitro according to established protocols. Donor cells were obtained from an ear-skin biopsy of a goat and digested with 0.25% trypsin-EDTA in PBS, and primary fibroblast cultures were established in TCM-199 with 10% FBS. The matured oocytes were dipped in D-PBS plus 10% FBS, 7.5 ${\mu}g/ml$ cytochalasin B and 0.05 M sucrose. Enucleation was accomplished by aspirating the first polar body and the portion of cytoplasm containing the metaphase II chromosomes using a micropipette with an outer diameter of 20∼30 ${\mu}m$. A single donor cell was transferred into the perivitelline space of each enucleated oocyte. The reconstructed oocytes were electrofused in 0.3 M mannitol fusion medium and, after electrofusion, the embryos were activated by electric stimulation. Interspecies nuclear transfer embryos with bovine cytoplasts were cultured for 7∼9 days in TCM-199 medium supplemented with 10% FBS and bovine oviduct epithelial cells, and those with porcine cytoplasts were cultured for 6∼8 days in NCSU-23 medium supplemented with 10% FBS, at $39^{\circ}C$ in 5% $CO_2$ in air. For interspecies nuclear transfer with recipient bovine oocytes, fusion was performed at field strengths of 1.95 kV/cm and 2.10 kV/cm; there was no significant difference between the two field strengths in fusion rate (47.7 and 44.6%) or cleavage rate (41.9 and 54.5%). Using field strengths of 1.95 kV/cm and 2.10 kV/cm for caprine-porcine NT oocytes, there was also no significant difference between the two treatments in fusion rate (51.3 and 46.1%) or cleavage rate (75.0 and 84.9%). The fusion rate of caprine-bovine NT oocytes was lower (P<0.05) with 1 pulse of 60 ${\mu}sec$ (19.3%) than with 1 pulse of 30 ${\mu}sec$ (50.8%) or 2 pulses of 30 ${\mu}sec$ (31.0%).
The cleavage rate was higher (P<0.05) with 1 pulse of 30 ${\mu}sec$ (53.3%) and 2 pulses of 30 ${\mu}sec$ (50.0%) than with 1 pulse of 60 ${\mu}sec$ (18.2%). The fusion rate of caprine-porcine NT oocytes was 48.1% with 1 pulse of 30 ${\mu}sec$, 45.2% with 2 pulses of 30 ${\mu}sec$, and 48.6% with 1 pulse of 60 ${\mu}sec$. The cleavage rate was higher (P<0.05) with 1 pulse of 30 ${\mu}sec$ (78.4%) and 1 pulse of 60 ${\mu}sec$ (79.4%) than with 2 pulses of 30 ${\mu}sec$ (53.6%). In caprine-bovine NT embryos, the developmental rates to the morula and blastocyst stages were 22.6% for interspecies nuclear transfer and 30.6% for parthenotes, which did not differ significantly. The developmental rate to the morula and blastocyst stages of caprine-porcine NT embryos was lower (P<0.05) for interspecies nuclear transfer (5.1%) than for parthenotes (37.4%).

Correlation analysis of radiation therapy position and dose factors for left breast cancer (좌측 유방암의 방사선치료 자세와 선량인자의 상관관계 분석)

  • Jeon, Jaewan;Park, Cheolwoo;Hong, Jongsu;Jin, Seongjin;Kang, Junghun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.29 no.1
    • /
    • pp.37-48
    • /
    • 2017
  • Purpose: One of the most basic requirements of radiation therapy is to prevent unnecessary exposure of normal tissue. In radiation therapy for breast cancer, it is important to evaluate the dose delivered to the lung and heart. This study therefore compares the dose factors of normal tissue according to the radiation treatment position and seeks an effective radiation treatment for breast cancer through correlation analysis. Materials and Methods: Computed tomography was conducted on 30 patients with left breast cancer in the supine and prone positions, and computerized treatment plans were established with the Eclipse Treatment Planning System (ver. 11). Using DVHs, the dose delivered to normal tissue was compared by position. Based on the results, the dose factors for each normal tissue were analyzed using SPSS (ver. 18), and the associations were examined through correlation analysis between variables and independent-sample tests. Finally, the HI and CI values in the supine and prone positions were compared using MIRADA RTx (ver. ad 1.6). Results: The computerized treatment plans for breast cancer in the supine position gave lung V20 of $16.5{\pm}2.6%$, V30 of $13.8{\pm}2.2%$, and mean dose of $779.1{\pm}135.9cGy$ (absolute value); in the prone position the corresponding values were $3.1{\pm}2.2%$, $1.8{\pm}1.7%$, and $241.4{\pm}138.3cGy$. The prone position showed an overall lower dose, the average lung dose being 537.7 cGy lower. In the case of the heart, V30 was $8.1{\pm}2.6%$ and $5.1{\pm}2.5%$, and the mean dose $594.9{\pm}225.3$ and $408{\pm}183.6cGy$, in the supine and prone positions respectively. In the statistical analysis, the Cronbach's alpha reliability index was 0.563. In the correlation analysis between variables, the correlation between position and the lung dose factors was about 0.89 or more, indicating a high correlation; the heart, on the other hand, was less correlated, with V30 at 0.488 and mean dose at 0.418.
Finally, in the independent-samples t-test, the differences by position in the dose factors of both lung and heart were significant at the 99% confidence level. Conclusion: Radiation therapy is advancing with state-of-the-art linear accelerators and a variety of treatment planning technologies, and the basic premise of these developments is the protection of normal tissue around the PTV. Treating a breast cancer patient in the prone position certainly takes more time and raises set-up reproducibility problems. Nevertheless, as the experimental results show, the dose entering the lungs and the heart can be reduced in the prone position. In conclusion, given sufficient treatment time and correct position verification, radiation treatment in the prone position will be more effective for the patient.


A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.125-140
    • /
    • 2013
  • We call a data set in which the number of records belonging to a certain class far outnumbers the number of records belonging to the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records account for the majority class, and 'churn' records account for the minority class. Sensitivity measures the proportion of actual retentions that are correctly identified as such; specificity measures the proportion of churns that are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to the low value of specificity. Many previous studies on imbalanced data sets employed an 'oversampling' technique, in which members of the minority class are sampled more than those of the majority class in order to make a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity will be decreased. In this research, we developed a hybrid model of support vector machine (SVM), artificial neural network (ANN), and decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model'. The process of construction and prediction of our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. SVM_I model and ANN_I model are constructed using the imbalanced data set, and SVM_B model is constructed using the balanced data set. SVM_I model is superior in sensitivity and SVM_B model is superior in specificity. For a record on which both SVM_I model and SVM_B model make the same prediction, that prediction becomes the final solution.
If they make different predictions, the final solution is determined by the discrimination rules obtained from the ANN and decision tree. For the records on which SVM_I model and SVM_B model make different predictions, a decision tree model is constructed using the ANN_I output value as input and actual retention or churn as the target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ≥ 0.285, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research. The result we present in this research is the structure or framework of our hybrid SVM model, not a specific threshold value such as 0.285; the threshold value in the above discrimination rules can therefore be changed to any value depending on the data. In order to evaluate the performance of our hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, which is better than that of SVM_I model or SVM_B model. The points worth noticing here are its sensitivity, 95.02%, and specificity, 69.24%. The sensitivity of SVM_I model is 94.65%, and the specificity of SVM_B model is 67.00%. Therefore, the hybrid SVM model developed in this research improves the specificity of SVM_B model while maintaining the sensitivity of SVM_I model.
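The combination step of the hybrid SVM model described above can be sketched compactly. This is a minimal reconstruction of the decision logic only (the trained SVM_I, SVM_B, and ANN_I models are assumed to exist elsewhere; the function name is illustrative), using the paper's 0.285 threshold:

```python
def hybrid_predict(svm_i_pred, svm_b_pred, ann_i_output, threshold=0.285):
    """Combine predictions as in the hybrid SVM model.

    svm_i_pred / svm_b_pred: 'retention' or 'churn' predictions from the
    SVMs trained on the imbalanced and balanced (oversampled) data sets.
    ann_i_output: output value of the ANN trained on the imbalanced set,
    consulted only when the two SVMs disagree.
    """
    if svm_i_pred == svm_b_pred:
        # Agreement: that prediction becomes the final solution.
        return svm_i_pred
    # Disagreement: apply the discrimination rule found by the decision tree.
    return "churn" if ann_i_output >= threshold else "retention"
```

The threshold 0.285 is the value optimized for the paper's data; in practice it would be re-derived from the decision tree for each new data set.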
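Sensitivity and specificity, as the abstract defines them (retention is the majority class treated as positive, churn the minority class treated as negative), can be computed directly from prediction counts. A small stdlib sketch with hypothetical labels:

```python
def sensitivity_specificity(preds, actuals, positive="retention", negative="churn"):
    """Sensitivity: fraction of actual retentions predicted as retention.
    Specificity: fraction of actual churns predicted as churn."""
    tp = sum(p == positive for p, a in zip(preds, actuals) if a == positive)
    tn = sum(p == negative for p, a in zip(preds, actuals) if a == negative)
    n_pos = sum(a == positive for a in actuals)
    n_neg = sum(a == negative for a in actuals)
    return tp / n_pos, tn / n_neg

# Hypothetical example: two of four predictions correct in each class.
sens, spec = sensitivity_specificity(
    ["retention", "retention", "churn", "churn"],
    ["retention", "churn", "churn", "retention"],
)
```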