• Title/Summary/Keyword: Optimal time


Effect of OPU (Ovum Pick-Up) Duration on the Rate of Collected Ova and In Vitro Produced Blastocyst Formation (OPU(Ovum Pick-Up) 채란기간이 난자 및 수정란 생산에 미치는 영향)

  • Jin, Jong-In;Kwon, Tae-Hyeon;Choi, Byeong-Hyun;Kim, Sung-Soo;Jo, Hyun-Tae;Kong, Il-Keun
    • Journal of Embryo Transfer / v.25 no.1 / pp.15-20 / 2010
  • This study was performed to identify the optimal timing for oocyte donor replacement during the OPU procedure. OPU was carried out to collect oocytes from every donor at intervals of 3~4 days (twice a week). The collected oocytes were matured in vitro in TCM-199 supplemented with 10% FBS, 10 mg/ml FSH, and 1 mg/ml estradiol for 24 h. After 24 h of exposure to sperm, the presumptive zygotes were cultured in CR1aa medium supplemented with 4 mg/ml BSA for 3 days and then in CR1aa medium with 10% FBS for another 3~4 days. The mean number of retrieved oocytes remained constant for up to 3 months (6.0±0.5, 6.2±0.7, 5.2±0.6), but decreased significantly from 4 to 6 months (3.7±0.5, 2.8±0.4, 1.2±0.2) (p<0.05). The blastocyst development rate was likewise similar from 1 to 3 months (37.2%, 40.4%, and 44.6%), but decreased significantly from 4 to 6 months (24.8%, 29.3%, and 28.6%, respectively) (p<0.05). The production of OPU-derived embryos in months 1 to 3 (2.2±0.3, 2.5±0.3, and 2.3±0.4) was significantly higher than in months 4 to 6 (0.9±0.2, 0.8±0.2, and 0.3±0.2, respectively) (p<0.05). In conclusion, OPU performed twice per week was efficient for up to 3 months of donor use, producing over 64 transferable embryos, after which the donor should be replaced; a replacement time of 3 months maximizes the production of OPU-derived embryos.

Preparation of Pure CO2 Standard Gas from Calcium Carbonate for Stable Isotope Analysis (탄산칼슘을 이용한 이산화탄소 안정동위원소 표준시료 제작에 대한 연구)

  • Park, Mi-Kyung;Park, Sunyoung;Kang, Dong-Jin;Li, Shanlan;Kim, Jae-Yeon;Jo, Chun Ok;Kim, Jooil;Kim, Kyung-Ryul
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.18 no.1 / pp.40-46 / 2013
  • The isotope ratios ¹³C/¹²C and ¹⁸O/¹⁶O of a sample in a mass spectrometer are measured relative to those of a pure CO₂ reference gas (i.e., a laboratory working standard). Thus, calibration of a laboratory working standard gas to the international isotope scales (Pee Dee Belemnite (PDB) for δ¹³C and Vienna Standard Mean Ocean Water (V-SMOW) for δ¹⁸O) is essential for comparisons with data sets obtained by other groups on other mass spectrometers. However, well-calibrated standard gases are often difficult to obtain because of their production time and high price. An additional difficulty is that fractionation can occur inside the gas cylinder, most likely due to pressure drop during long-term use. Therefore, studies on the laboratory production of a pure CO₂ isotope standard gas from a stable solid calcium carbonate standard material have been performed. For this study, we propose a method to extract pure CO₂ gas without isotope fractionation from a solid calcium carbonate material. The method is similar to that suggested by Coplen et al. (1983), but is better optimized to produce a large amount of pure CO₂ gas from calcium carbonate. The CaCO₃ releases CO₂ in reaction with 100% pure phosphoric acid at 25°C in a custom-designed, evacuated reaction vessel. Here we introduce the optimal procedure, reaction conditions, and sample/reactant sizes for the calcium carbonate-phosphoric acid reaction, and also provide details for extracting, purifying, and collecting the CO₂ gas out of the reaction vessel. The measurements of δ¹⁸O and δ¹³C of CO₂ were performed at Seoul National University using a stable isotope ratio mass spectrometer (VG Isotech, SIRA Series II) operated in dual-inlet mode.
The overall analytical precisions for δ¹⁸O and δ¹³C were evaluated from the standard deviations of multiple measurements on 15 separate samples of purified CO₂. The pure CO₂ samples were taken from 100-mg aliquots of a solid calcium carbonate (Solenhofen-ori CaCO₃) during an 8-day experimental period. The multiple measurements yielded 1σ precisions of ±0.01‰ for δ¹³C and ±0.05‰ for δ¹⁸O, comparable to the internal instrumental precision of the SIRA. We therefore conclude that the method proposed in this study can serve as a way to produce an accurate secondary and/or laboratory CO₂ standard gas. We hope this study helps resolve the difficulties in placing a laboratory working standard onto the international isotope scales and in making accurate comparisons with data sets from other groups.
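
The δ values used throughout the abstract follow the standard per-mil definition relative to a reference ratio. A minimal sketch of that definition (the sample ratio below is a made-up illustration, not a measured value):

```python
# Delta notation for stable isotope ratios, in per mil (‰):
# delta = (R_sample / R_standard - 1) * 1000, where R is the
# heavy-to-light isotope ratio (e.g. 13C/12C).

def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """Per-mil deviation of a sample ratio from a standard ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Commonly cited 13C/12C ratio of the VPDB standard.
R_VPDB = 0.0111802

# A hypothetical sample slightly enriched in 13C relative to VPDB:
d13c = delta_per_mil(0.0112, R_VPDB)  # roughly +1.8 ‰
```

A sample whose ratio equals the standard's has δ = 0 ‰ by construction, which is why calibrating the working standard to VPDB/V-SMOW anchors all subsequent measurements.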

The Effect of Application of Cattle Slurry on Dry Matter Yield and Feed Values of Tall Fescue (Festuca arundinacea Schreb.) in Uncultivated Rice Paddy (유휴 논 토양에서 액상 우분뇨의 시용이 톨 페스큐의 건물수량과 사료가치에 미치는 영향)

  • Jo, Ik-Hwan
    • Journal of The Korean Society of Grassland and Forage Science / v.27 no.1 / pp.9-20 / 2007
  • This experiment was conducted to investigate the effects of applying water-diluted and undiluted cattle slurry on the seasonal and annual dry matter (DM) yields and feed values of tall fescue in an uncultivated rice paddy, in comparison with chemical fertilizer, in order to determine the optimal application season and dilution level of cattle slurry. When diluted or undiluted cattle slurry was applied to the uncultivated rice paddy, annual dry matter yields were 11.31 to 14.81 ton DM/ha (average 13.13 ton DM/ha) for diluted and 10.57 to 12.51 ton DM/ha (average 11.50 ton DM/ha) for undiluted slurry, both higher than with no fertilizer (9.21 ton DM/ha). Furthermore, separate application in early spring and summer (SA plots) and separate application in early and late spring and summer (SUA plots) for undiluted slurry, and whole application in spring (DS plots), separate application in early spring and summer (DSA plots), and separate application in early and late spring and summer (DSUA plots) for diluted slurry, gave significantly (P<0.05) higher annual dry matter yields than the no-fertilizer plots. Plots given chemical fertilizer with nitrogen (N), phosphorus (P), and potassium (K) yielded 15.38 ton DM/ha annually, significantly (P<0.05) higher than chemical fertilizer containing only P and K and than the no-fertilizer plots. Moreover, the average annual DM yield with P and K fertilizer alone was lower than with the cattle slurry applications. The efficiency of DM production per unit of mineral nitrogen from chemical fertilizer averaged 31.3 kg DM/kg N annually; across cutting times of tall fescue, it decreased in the order of the 2nd, 1st, and 3rd growths. The annual nitrogen efficiencies of DM production for diluted and undiluted cattle slurry were 26.1 and 15.3 kg DM/kg N, respectively, and were highest in the 2nd growth.
The DM production efficiencies of cattle slurry relative to mineral nitrogen were 48.9% (undiluted) and 83.4% (diluted). The annual crude protein (CP) contents of tall fescue with diluted cattle slurry were 9.9 to 11.6%, significantly (P<0.05) higher than with no fertilization (9.5%) or chemical fertilizer (9.0 to 9.8%), while the annual average NDF and ADF contents were lowest with no fertilization. Accordingly, the relative feed value (RFV) and total digestible nutrients (TDN) of the no-fertilizer plots were significantly (P<0.05) higher than those of the other plots. Application of cattle slurry, and its dilution, significantly increased the yields of crude protein and total digestible nutrients compared with no fertilizer and/or P and K fertilizer (P<0.05). These trends were most conspicuous for water-diluted cattle slurry applied separately in early and late spring and summer (DSUA plots).
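
The nitrogen-use efficiencies quoted above (kg DM/kg N) are the extra dry matter produced per kilogram of nitrogen applied. A minimal sketch of that arithmetic, using the abstract's mean yields and an assumed application rate of 150 kg N/ha (the actual rate is not stated in this abstract):

```python
def n_use_efficiency(yield_fert_t: float, yield_control_t: float,
                     n_applied_kg: float) -> float:
    """Apparent N-use efficiency: extra kg DM per kg N applied.

    Yields are given in ton DM/ha and converted to kg DM/ha.
    """
    return (yield_fert_t - yield_control_t) * 1000.0 / n_applied_kg

# Diluted slurry (13.13 t/ha) vs. no fertilizer (9.21 t/ha) at an
# assumed 150 kg N/ha gives about 26.1 kg DM/kg N, the same order
# as the diluted-slurry figure reported above.
eff = n_use_efficiency(13.13, 9.21, 150)
```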

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.47-73 / 2020
  • KTX rolling stock is a system consisting of several machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. In the event of a rolling stock failure, the knowledge and experience of the maintainer determine the time and quality of the work needed to solve the problem, and thus the resulting availability of the vehicle. Although problem solving is generally based on fault manuals, experienced and skilled professionals can diagnose faults quickly and take action by applying personal know-how. Since this knowledge exists in tacit form, it is difficult to pass on completely to a successor, and previous studies have developed case-based rolling stock expert systems to turn it into data-driven knowledge. Nonetheless, research on the KTX rolling stock most commonly used on main lines, and on systems that extract the meaning of text to search for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for emerging failures by using the know-how of rolling stock maintenance experts as problem-solving cases. For this purpose, a case base was constructed by collecting rolling stock failure data generated from 2015 to 2017, and an integrated dictionary was built from the case base to cover the essential terminology and failure codes specific to the railway rolling stock sector. Based on the deployed case base, a new failure was matched against past cases, and the three most similar failure cases were retrieved so that the actual actions taken in those cases could be proposed as a diagnostic guide.
In this study, to compensate for the limitations of keyword-matching case retrieval in previous case-based reasoning studies on rolling stock failure expert systems, various dimensionality reduction techniques were applied to calculate similarity while taking into account the semantic relationships among failure descriptions, and their usefulness was verified through experiments. Similar cases were retrieved by applying three algorithms: Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, each used to extract the characteristics of a failure and measure the cosine distance between vectors. Precision, recall, and the F-measure were used to assess the performance of the proposed actions. To compare the dimensionality reduction techniques against an algorithm that randomly extracts failure cases with identical failure codes and an algorithm that applies cosine similarity directly to word vectors, analysis of variance confirmed that the performance differences among the five algorithms were statistically significant. In addition, optimal settings for practical application were derived by examining how performance varies with the number of dimensions used in the reduction. The analysis showed that direct cosine similarity on words outperformed the reduced representations based on Non-negative Matrix Factorization (NMF) and Latent Semantic Analysis (LSA), while the algorithm using Doc2Vec performed best overall. Furthermore, among the dimensionality reduction settings, performance improved as the number of dimensions grew toward an appropriate level.
Through this study, we confirmed the usefulness of effective methods for extracting the characteristics of data and converting unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are text. Text mining is being studied for use in many areas, but studies using such text data are still lacking in environments with many specialized terms and limited access to data, such as the one addressed here. In this regard, it is significant that this study is the first to present an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract failure characteristics, complementing keyword-based case search. It is expected to provide implications as a basic study for developing diagnostic systems that can be used immediately on site.
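
The retrieval step described above (vectorize failure text, reduce dimensionality, rank past cases by cosine similarity) can be sketched with one of the three techniques named, LSA via truncated SVD. The toy "case base" strings below are placeholders, not actual KTX failure records:

```python
# LSA-based similar-case retrieval: TF-IDF -> truncated SVD -> cosine top-k.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

case_base = [
    "traction motor overheating alarm during acceleration",
    "brake cylinder pressure drop on trailing bogie",
    "pantograph contact strip wear beyond limit",
    "traction converter overcurrent trip at departure",
    "air compressor fails to build main reservoir pressure",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(case_base)

svd = TruncatedSVD(n_components=3, random_state=0)  # dimensionality reduction (LSA)
lsa = svd.fit_transform(tfidf)

def top_k_cases(query: str, k: int = 3):
    """Return indices of the k past cases most similar to a new failure."""
    q = svd.transform(vectorizer.transform([query]))
    sims = cosine_similarity(q, lsa)[0]
    return sims.argsort()[::-1][:k].tolist()

matches = top_k_cases("overcurrent trip of traction converter", k=3)
```

The actions recorded for the returned cases would then be presented as the diagnostic guide; the paper compares this LSA variant against NMF, Doc2Vec, raw cosine similarity on words, and random retrieval by failure code.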

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.125-148 / 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields, and it interests many analysts because the amount of data is very large and it is relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, document classification, which assigns documents to predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which extracts the main contents from one or several documents, have been actively studied. In particular, text summarization is actively applied in business through news summary services, privacy policy summary services, etc. In academia, much research has been done on both the extraction approach, which selectively presents the main elements of a document, and the abstraction approach, which extracts elements of a document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not progressed as much as automatic text summarization itself. Most existing studies on summarization quality evaluation manually summarized documents, used them as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text through various techniques, and the quality of the automatic summary is measured by comparison with the reference document, which serves as an ideal summary.
Reference documents are provided in two major ways. The most common is manual summarization, in which a person creates an ideal summary by hand. Since this method requires human intervention, it takes a lot of time and cost, and the evaluation result may differ depending on who writes the summary. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. A representative recent attempt reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary; under this method, the more the frequent terms of the full text appear in the summary, the better its quality is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a summary judged "good" on frequency alone is not always a good summary in this essential sense. To overcome the limitations of these previous studies, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little content is duplicated among the sentences of the summary, and completeness as an element indicating how little of the source content is omitted from the summary. We propose a method for the automatic quality evaluation of text summarization based on these two concepts.
To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor's hotel reviews, the reviews were summarized for each hotel, and the results of experiments evaluating summary quality according to the proposed methodology are presented. We also provide a way to integrate completeness and succinctness, which stand in a trade-off relationship, into an F-score, and propose a method to perform optimal summarization by varying the sentence-similarity threshold.
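
The completeness/succinctness/F-score combination described above can be sketched as follows. Jaccard word overlap stands in for the paper's sentence-similarity measure, and the 0.3 threshold and toy review sentences are illustrative assumptions:

```python
# Completeness: fraction of source sentences covered by the summary.
# Succinctness: fraction of summary sentence pairs that do not duplicate
# each other. Both are combined into an F-score (harmonic mean).

def jaccard(a: str, b: str) -> float:
    """Word-set overlap, a simple stand-in for sentence similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def completeness(full_text, summary, thr=0.3):
    covered = [any(jaccard(s, t) >= thr for t in summary) for s in full_text]
    return sum(covered) / len(full_text)

def succinctness(summary, thr=0.3):
    pairs = [(i, j) for i in range(len(summary)) for j in range(i + 1, len(summary))]
    if not pairs:
        return 1.0
    dup = sum(jaccard(summary[i], summary[j]) >= thr for i, j in pairs)
    return 1.0 - dup / len(pairs)

def f_score(c, s):
    return 0.0 if c + s == 0 else 2 * c * s / (c + s)

full_text = [
    "the room was clean and quiet",
    "staff were friendly and helpful",
    "breakfast offered little variety",
]
summary = ["clean quiet room with friendly staff"]

c = completeness(full_text, summary)  # only the first sentence is covered
s = succinctness(summary)             # a one-sentence summary has no duplicates
score = f_score(c, s)
```

Raising the similarity threshold makes completeness harder to achieve and succinctness easier, which is the trade-off the paper tunes via the F-score.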

Performance Evaluation of Siemens CTI ECAT EXACT 47 Scanner Using NEMA NU2-2001 (NEMA NU2-2001을 이용한 Siemens CTI ECAT EXACT 47 스캐너의 표준 성능 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.38 no.3 / pp.259-267 / 2004
  • Purpose: NEMA NU2-2001 was proposed as a new standard for the performance evaluation of whole-body PET scanners. In this study, the system performance of the Siemens CTI ECAT EXACT 47 PET scanner, including spatial resolution, sensitivity, scatter fraction, and count rate performance in 2D and 3D modes, was evaluated using this new standard method. Methods: The ECAT EXACT 47 is a BGO-crystal PET scanner covering an axial field of view (FOV) of 16.2 cm. Retractable septa allow 2D and 3D data acquisition. All PET data were acquired according to the NEMA NU2-2001 protocols (coincidence window: 12 ns, energy window: 250~650 keV). For the spatial resolution measurement, an F-18 point source was placed at the center of the axial FOV and at one fourth of the axial FOV from the center, at (a) x=0, y=1; (b) x=0, y=10; and (c) x=10, y=0 cm, where x and y are the transaxial horizontal and vertical directions and z is the scanner's axial direction. Images were reconstructed using FBP with a ramp filter without any post-processing. To measure system sensitivity, the NEMA sensitivity phantom filled with F-18 solution and surrounded by 1~5 aluminum sleeves was scanned at the center of the transaxial FOV and at 10 cm offset from the center. Attenuation-free sensitivity values were estimated by extrapolating the data to zero wall thickness. The NEMA scatter phantom, 70 cm in length, was filled with F-18 or C-11 solution (2D: 2,900 MBq; 3D: 407 MBq), and coincidence count rates were measured over 7 half-lives to obtain the noise equivalent count rate (NECR) and scatter fraction. We confirmed that the dead time loss of the last frame was below 1%. The scatter fraction was estimated by averaging the true-to-background (scatter+random) ratios of the last 3 frames, in which the random rates are negligibly small.
Results: Axial and transverse resolutions at 1 cm offset from the center were 0.62 and 0.66 cm (FBP in 2D and 3D) and 0.67 and 0.69 cm (FBP in 2D and 3D), respectively. Axial, transverse radial, and transverse tangential resolutions at 10 cm offset from the center were 0.72 and 0.68 cm, 0.63 and 0.66 cm, and 0.72 and 0.66 cm (FBP in 2D and 3D), respectively. Sensitivity values were 708.6 (2D) and 2,931.3 (3D) counts/sec/MBq at the center, and 728.7 (2D) and 3,398.2 (3D) counts/sec/MBq at 10 cm offset from the center. Scatter fractions were 0.19 (2D) and 0.49 (3D). The peak true count rate and NECR were 64.0 kcps at 40.1 kBq/mL and 49.6 kcps at 40.1 kBq/mL in 2D, and 53.7 kcps at 4.76 kBq/mL and 26.4 kcps at 4.47 kBq/mL in 3D. Conclusion: The performance data of the CTI ECAT EXACT 47 PET scanner reported in this study will be useful for quantitative data analysis and for determining optimal image acquisition protocols with this widely used clinical and research scanner.
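
The noise equivalent count rate reported above is the standard figure of merit NECR = T²/(T + S + R), with T, S, R the true, scatter, and random coincidence rates. A minimal sketch; the scatter and random rates below are assumed values chosen only to illustrate the formula, not the measured ones:

```python
def necr(trues: float, scatter: float, randoms: float) -> float:
    """Noise equivalent count rate: NECR = T^2 / (T + S + R)."""
    return trues ** 2 / (trues + scatter + randoms)

# With the reported 2D peak true rate T = 64.0 kcps and an assumed
# S + R = 18.6 kcps, NECR comes out near the reported 49.6 kcps.
peak_necr_2d = necr(64.0, 10.0, 8.6)
```

Because scatter and randoms enter only the denominator, the 3D mode's much higher scatter fraction (0.49 vs 0.19) is consistent with its lower peak NECR despite comparable true rates.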

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.147-168 / 2017
  • There have been many studies on accurate stock market forecasting in academia for a long time, and various forecasting models using various techniques now exist. Recently, many attempts have been made to predict stock indices using machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more useful for short-term trading prediction and for applying statistical and mathematical techniques. Most studies using technical indicators have modeled stock price prediction as a binary classification - rising or falling - of future market movement (usually the next trading day). However, binary classification has many unfavorable aspects for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we predict the stock index by extending the existing binary scheme to a multi-class system of stock index trends (upward trend, boxed, downward trend). To solve this multi-classification problem, rather than techniques such as Multinomial Logistic Regression (MLOGIT), Multiple Discriminant Analysis (MDA), or Artificial Neural Networks (ANN), we propose an optimization model that uses a Genetic Algorithm as a wrapper to improve the performance of Multi-class Support Vector Machines (MSVM), which have proved superior in prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM, but also the selection of input variables (feature selection) and of training instances (instance selection).
To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method is more effective than the conventional MSVM, which has been known to show the best prediction performance to date, as well as existing artificial intelligence / data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection was confirmed to play a very important role in predicting the stock index trend, contributing more to the model's improvement than the other factors. To verify the usefulness of GA-MSVM, we applied it to trend forecasting of Korea's real KOSPI200 stock index. Our research primarily aims at predicting trend segments in order to capture signal acquisition or short-term trend transition points. The experimental data set includes technical indicators such as price and volatility indices (2004~2017) and macroeconomic data (interest rate, exchange rate, S&P 500, etc.) for the KOSPI200 stock index. Using various statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, trend classification, took three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for verification. To benchmark the proposed model, comparative experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted. The MSVM adopted the One-Against-One (OAO) approach, known as the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed model, GA-MSVM, performs at a significantly higher level than all comparative models.
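
The GA-wrapper idea described above can be sketched as follows: a chromosome is a binary feature mask, fitness is the cross-validated accuracy of a one-against-one SVM on the selected features, and selection/crossover/mutation evolve the mask. The synthetic data, GA settings, and hyperparameters are illustrative assumptions; the paper's actual chromosome also encodes kernel parameters and instance selection:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic 3-class data standing in for the 15 technical indicators.
X, y = make_classification(n_samples=150, n_features=15, n_informative=5,
                           n_redundant=2, n_classes=3, n_clusters_per_class=1,
                           random_state=0)

def fitness(mask: np.ndarray) -> float:
    """Cross-validated accuracy of an OAO SVM on the selected features."""
    if not mask.any():
        return 0.0
    clf = SVC(kernel="rbf", decision_function_shape="ovo")  # one-against-one
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

n_feat, pop_size, n_parents = X.shape[1], 12, 6
pop = rng.integers(0, 2, size=(pop_size, n_feat)).astype(bool)

for _ in range(6):                                       # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[scores.argsort()[::-1][:n_parents]]    # truncation selection
    children = []
    for _ in range(pop_size - n_parents):
        a = parents[rng.integers(n_parents)]
        b = parents[rng.integers(n_parents)]
        cut = rng.integers(1, n_feat)                    # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        children.append(child ^ (rng.random(n_feat) < 0.05))  # bit-flip mutation
    pop = np.vstack([parents, np.array(children)])

best_mask = pop[np.argmax([fitness(m) for m in pop])]
best_acc = fitness(best_mask)
```

Extending the chromosome with real-valued genes for C and gamma, and a second binary segment over training instances, recovers the joint optimization GA-MSVM performs.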

Quantitative Analysis of Carbohydrate, Protein, and Oil Contents of Korean Foods Using Near-Infrared Reflectance Spectroscopy (근적외 분광분석법을 이용한 국내 유통 식품 함유 탄수화물, 단백질 및 지방의 정량 분석)

  • Song, Lee-Seul;Kim, Young-Hak;Kim, Gi-Ppeum;Ahn, Kyung-Geun;Hwang, Young-Sun;Kang, In-Kyu;Yoon, Sung-Won;Lee, Junsoo;Shin, Ki-Yong;Lee, Woo-Young;Cho, Young Sook;Choung, Myoung-Gun
    • Journal of the Korean Society of Food Science and Nutrition / v.43 no.3 / pp.425-430 / 2014
  • Foods contain various nutrients such as carbohydrates, protein, oil, vitamins, and minerals. Among them, carbohydrates, protein, and oil are the main constituents of foods. Usually, these constituents are analyzed by methods such as the Kjeldahl and Soxhlet methods. However, these analytical methods are complex, costly, and time-consuming. Thus, this study aimed to analyze carbohydrate, protein, and oil contents rapidly and effectively with near-infrared reflectance spectroscopy (NIRS). A total of 517 food samples were measured within the wavelength range of 400 to 2,500 nm; 412 calibration samples and 162 validation samples were used for NIRS equation development and validation, respectively. For carbohydrates, the most accurate equation was obtained under 1, 4, 5, 1 (1st derivative, 4 nm gap, 5-point smoothing, and 1-point second smoothing) math treatment conditions using the weighted MSC (multiplicative scatter correction) scatter correction method with MPLS (modified partial least squares) regression. For protein and oil, the best equations were obtained under 2, 5, 5, 3 and 1, 1, 1, 1 conditions, respectively, using the standard MSC and standard normal variate only scatter correction methods with MPLS regression. These NIRS equations showed very high coefficients of determination in calibration (R²: carbohydrates, 0.971; protein, 0.974; oil, 0.937) and low standard errors of calibration (carbohydrates, 4.066; protein, 1.080; oil, 1.890). The optimal equation conditions were then applied to the validation set of 162 samples. Validation showed very high coefficients of determination in prediction (r²: carbohydrates, 0.987; protein, 0.970; oil, 0.947) and low standard errors of prediction (carbohydrates, 2.515; protein, 1.144; oil, 1.370). Therefore, these NIRS equations are applicable for determining the carbohydrate, protein, and oil contents of various foods.

Results of Radiation Therapy for Carcinoma of the Uterine Cervix (자궁경부암의 방사선치료 성적)

  • Lee Kyung-Ja
    • Radiation Oncology Journal / v.13 no.4 / pp.359-368 / 1995
  • Purpose: This is a retrospective analysis of the pattern of failure, survival rate, and prognostic factors in 114 patients with histologically proven invasive cancer of the uterine cervix treated with definitive irradiation. Materials and Methods: One hundred fourteen patients with invasive carcinoma of the cervix were treated with a combination of intracavitary irradiation using a Fletcher-Suit applicator and external beam irradiation with 6 MV X-rays at the Ewha Womans University Hospital between March 1982 and March 1990. The median age was 53 years (range: 30-77 years). The FIGO stage distribution was 19 for IB, 23 for IIA, 42 for IIB, 12 for IIIA, and 18 for IIIB. The summed dose of external beam and intracavitary irradiation to point A was 80-90 Gy (median: 8,580 cGy) in the early stages (IB-IIA) and 85-100 Gy (median: 8,850 cGy) in the advanced stages (IIB-IIIB). The Kaplan-Meier method was used to estimate survival rates, and multivariate analysis of prognostic factors was performed using the log-likelihood for the Weibull model. Results: The pelvic failure rates by stage were 10.5% for IB, 8.7% for IIA, 23.8% for IIB, 50.0% for IIIA, and 38.9% for IIIB. The rates of distant metastasis by stage were 0% for IB, 8.7% for IIA, 4.8% for IIB, 0% for IIIA, and 11.1% for IIIB. The time to failure ranged from 3 to 50 months, with a median of 15 months after completion of radiation therapy. There was no significant correlation between the dose to point A (≤90 Gy vs >90 Gy) and pelvic tumor control (P>0.05). The incidence rates of grade 2 rectal and bladder complications were 3.5% (4/114) and 7% (8/114), respectively; one patient had sigmoid colon obstruction and one patient had severe cystitis. The overall 5-year survival rate was 70.5% and the disease-free survival rate was 53.6%. The overall 5-year survival rate by stage was 100% for IB, 76.9% for IIA, 77.6% for IIB, 87.5% for IIIA, and 69.1% for IIIB.
The 5-year disease-free survival rate by stage was 81.3% for IB, 67.9% for IIA, 46.8% for IIB, 45.4% for IIIA, and 34.4% for IIIB. The prognostic factors for disease-free survival by multivariate analysis were performance status (p=0.0063) and response after completion of radiation therapy (p=0.0026); stage, age, and radiation dose to point A were not significant. Conclusion: The results of radiation therapy for early-stage uterine cervix cancer were relatively good, but the local control and survival rates in advanced stages were poor in spite of irradiation of point A with doses above 90 Gy. Prospective randomized studies are recommended to establish optimal tumor doses for the various stages and volumes of carcinoma of the uterine cervix, and adjuvant chemotherapy or radiation-sensitizing agents should be considered to increase pelvic control and survival in advanced cancer of the uterine cervix.
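
The survival rates above come from the Kaplan-Meier estimator named in the Methods. A minimal sketch on a small hypothetical cohort (the times and censoring flags below are made up, not the study data; tied event times are not handled):

```python
def kaplan_meier(times, events):
    """Return (time, S(t)) steps; events: 1 = failure, 0 = censored.

    At each failure time, survival is multiplied by the fraction of
    at-risk patients who survive that time; censored patients simply
    leave the risk set.
    """
    at_risk = len(times)
    surv, steps = 1.0, []
    for t, e in sorted(zip(times, events)):
        if e == 1:
            surv *= (at_risk - 1) / at_risk
            steps.append((t, surv))
        at_risk -= 1
    return steps

# 5 patients: failures at 3 and 15 months; censoring at 8, 20, 24 months.
curve = kaplan_meier([3, 8, 15, 20, 24], [1, 0, 1, 0, 0])
```

At t=3 all 5 patients are at risk, so S drops to 4/5; by t=15 only 3 remain at risk (one censored at 8), so S drops to 4/5 × 2/3. The 5-year rates quoted in the abstract are this estimator read off at 60 months for each stage group.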


Protoplast Fusion of Nicotiana glauca and Solanum tuberosum Using Selectable Marker Genes (표식유전자를 이용한 담배와 감자의 원형질체 융합)

  • Park, Tae-Eun;Chung, Hae-Joun
    • The Journal of Natural Sciences / v.4 / pp.103-142 / 1991
  • These studies were carried out to select somatic hybrids using the selectable marker genes of Nicotiana glauca transformed with the NPTII gene and Solanum tuberosum transformed with T-DNA, and to study the characteristics of the transformants. The results are summarized as follows. 1. Crown gall tumors and hairy roots were formed on potato tuber discs infected with A. tumefaciens Ach5 and A. rhizogenes ATCC15834; these tumors and roots could be grown on phytohormone-free media. 2. Callus formation from hairy roots was promoted on medium containing 2,4-D 2 mg/L with casein hydrolysate 1 g/L. 3. The survival ratio of crown gall tumor callus derived from potato increased on medium containing activated charcoal 0.5-2.0 mg/L; hairy roots, on the other hand, became necrotic on the same medium. 4. Callus derived from hairy roots grew well within a short time in suspension culture on liquid medium containing 2,4-D 2 mg/L and casein hydrolysate 1 g/L. 5. The binary vector pGA643 was mobilized from E. coli MC1000 into wild-type Agrobacterium tumefaciens Ach5, A. tumefaciens A4T, and disarmed A. tumefaciens LBA4404 using a triparental mating method with E. coli HB101/pRK2013. Transconjugants were obtained on minimal media containing tetracycline and kanamycin, and the pGA643 vectors were confirmed by electrophoresis on 0.7% agarose gel. 6. Kanamycin-resistant calli were selected on media supplemented with 2,4-D 0.5 mg/L and kanamycin 100 µg/ml after co-cultivating tobacco stem explants with A. tumefaciens LBA4404/pGA643, and the selected calli were propagated on the same medium. 7. Multiple shoots were regenerated from the kanamycin-resistant calli on MS medium containing BA 2 mg/L. 8. Leaf segments of the transformed shoots were able to grow vigorously on medium supplemented with a high concentration of kanamycin (1,000 µg/ml). 9. Kanamycin-resistant shoots rooted and elongated on medium containing kanamycin 100 µg/ml, whereas normal shoots did not. 10. For protoplast production from potato calli transformed with T-DNA and mesophyll tissue transformed with the NPTII gene, the former were isolated in an enzyme mixture of 2.0% cellulase Onozuka R-10, 1.0% driselase, 1.0% macerozyme, and 0.5 M mannitol, and the latter in an enzyme mixture of 1.0% cellulase Onozuka R-10, 0.3% macerozyme, and 0.7 M mannitol. 11. The optimal concentration of mannitol in the enzyme mixture for high protoplast yield was 0.8 M for both transformed tobacco mesophyll and potato callus, and protoplast viabilities were above 90% in both cases. 12. Tobacco mesophyll and potato callus protoplasts were fused using a PEG solution. Cell walls were regenerated on hormone-free media supplemented with kanamycin after 5 days, and colonies were observed after 4 weeks of culture.
