• Title/Summary/Keyword: Evaluation parameter


Evaluation of the Usefulness of Various CT Kernel Applications for PET/CT Attenuation Correction (PET/CT 감쇠보정시 다양한 CT Kernel 적용에 따른 유용성 평가)

  • Lee, Jae-Young;Seong, Yong-Jun;Yoon, Seok-Hwan;Park, Chan-Rok;Lee, Hong-Jae;Noh, Kyung-Wun
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.21 no.2
    • /
    • pp.37-43
    • /
    • 2017
  • Purpose: Attenuation correction of PET/CT images is now commonly performed with CTAC (Computed Tomography Attenuation Correction), which enables quantitative evaluation by the SUV (Standardized Uptake Value). The purpose of this study was to evaluate the SUV and to find a proper CT kernel by applying various CT kernels to the CTAC used in PET/CT reconstruction. Materials and Methods: A Biograph mCT 64 was used. Twenty patients examined at our hospital from February through March 2017 were included, and data acquired with a NEMA IEC Body Phantom were reconstructed into PET/CT images with CTAC applied using various CT kernels. ANOVA was used to test for significant differences in the results. Results: Relative to the B08F CT kernel, the measured radioactivity concentration of the phantom increased by 0.96% for B45F and 6.58% for B80F, and the phantom SUVmax increased by 0.86% for B45F and 6.54% for B80F. In the patient data, the SUVmax increased by 1.6% (B45F) and 6.6% (B80F) in the lung, 0.7% and 4.7% in the liver, and 1.3% and 6.2% in bone, respectively. The standard deviation (SD) increased by 4.2% (B45F) and 15.4% (B80F) in the lung, 2.1% and 11% in the liver, and 2.3% and 14.7% in bone, respectively. No significant difference was found among the three CT kernels (p>.05). Conclusion: When a noisier CT kernel is used for PET/CT reconstruction, both the SUVmax and the SD in the ROI (region of interest) tend to change: as the CT kernel number increases, sharp noise in the ROI increases, so the SUVmax and SD are measured higher, although the differences are not statistically significant. Therefore, using a CT kernel with low variation of SD yields less variation of SUV.
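The kernel comparison above boils down to a one-way ANOVA across the three reconstructions. A minimal pure-Python sketch of that test; the SUVmax readings below are illustrative stand-ins, not the study's data:

```python
# One-way ANOVA, as used by the authors to compare SUVmax across the
# three CT kernels (B08F, B45F, B80F). Sample values are hypothetical.

def one_way_anova(groups):
    """Return the F statistic for a one-way ANOVA over lists of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical SUVmax readings per kernel
b08f = [2.10, 2.15, 2.08, 2.12]
b45f = [2.12, 2.16, 2.10, 2.13]
b80f = [2.24, 2.28, 2.21, 2.26]
f_stat = one_way_anova([b08f, b45f, b80f])
```

The F statistic would then be compared against the F distribution with (k-1, N-k) degrees of freedom to obtain the p-value reported in the abstract.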


A Knowledge-based Approach for the Estimation of Effective Sampling Station Frequencies in Benthic Ecological Assessments (지식기반적 방법을 활용한 저서생태계 평가의 유효 조사정점 개수 산정)

  • Yoo, Jae-Won;Kim, Chang-Soo;Jung, Hoe-In;Lee, Yong-Woo;Lee, Man-Woo;Lee, Chang-Gun;Jin, Sung-Ju;Maeng, Jun-Ho;Hong, Jae-Sang
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.16 no.3
    • /
    • pp.147-154
    • /
    • 2011
  • Decision making in Environmental Impact Assessment (EIA) and Consultation on the Coastal Area Utilization (CCAU) rests on survey reports and thus requires concrete and accurate information on natural habitats. Despite the importance of reporting the ecological quality and status of habitats, accumulated knowledge and recent techniques in ecology, such as the use of investigated cases and indicators/indices, have not been utilized in the evaluation processes. Even the EIA report does not contain sufficient information for decision making on conservation and development. In addition, for CCAU, sampling effort has been so limited that only two or a few stations were set in most study cases. This hampers transferring key ecological information to both specialist review and decision making. Hence, setting an effective number of sampling stations is a prerequisite for better assessment. We introduce a few statistical techniques to determine the number of sampling stations in macrobenthos surveys. However, applying these techniques requires a preliminary study that cannot be performed under the current assessment frame. As an alternative, we analyzed the spatial configuration of sampling stations from 19 previous studies, on the assumption that configurations reported in scientific journals contributed to a successful understanding of the ecological phenomena studied. The distance between stations and the number of sampling stations in a 4 × 4 km unit area were calculated; the medians of the two parameters were 2.3 km and 3, respectively. For each study, the approximated survey area (ASA, km²) was obtained from the number of sampling stations in a unit area (NSSU) and the total number of sampling stations (TNSS). To predict an appropriate ASA or NSSU/TNSS, we found and suggest a statistically significant functional relationship among ASA, survey purpose, and NSSU. This empirical approach will contribute to increasing sampling effort in field surveys and to communicating reasonable data and information in EIA and CCAU.
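The ASA bookkeeping can be sketched as follows. The formula (TNSS/NSSU scaled by the 4 × 4 km unit area) is an assumption consistent with the abstract's definitions, not taken from the paper itself:

```python
# Hypothetical reconstruction of the paper's survey-area bookkeeping:
# ASA (km^2) from the total number of stations (TNSS) and the number
# of stations per 4 x 4 km unit area (NSSU). The scaling formula is
# an assumption inferred from the abstract.
UNIT_AREA_KM2 = 4 * 4  # the 4 x 4 km unit area used in the study

def approximated_survey_area(tnss, nssu):
    """Approximated survey area in km^2, assuming uniform station density."""
    return tnss / nssu * UNIT_AREA_KM2

# Example: 12 stations total at the reported median density of 3 per unit area
asa = approximated_survey_area(12, 3)  # 64.0 km^2
```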

DEVELOPMENT OF SAFETY-BASED LEVEL-OF-SERVICE CRITERIA FOR ISOLATED SIGNALIZED INTERSECTIONS (독립신호 교차로에서의 교통안전을 위한 서비스수준 결정방법의 개발)

  • Dr. Tae-Jun Ha
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.3-32
    • /
    • 1995
  • The Highway Capacity Manual specifies procedures for evaluating intersection performance in terms of delay per vehicle. What is lacking in the current methodology is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections based on the relative hazard of alternative intersection designs and signal timing plans. Conflict opportunity models were developed for the crossing, diverging, and stopping maneuvers associated with left-turn and rear-end accidents. Safety-based level-of-service criteria were then developed based on the distribution of conflict opportunities computed from the developed models. A case study evaluation of the level-of-service analysis methodology revealed that the developed safety-based criteria were not as sensitive to changes in prevailing traffic, roadway, and signal timing conditions as the traditional delay-based measure. However, the methodology did permit a quantitative assessment of the trade-off between delay reduction and safety improvement. The Highway Capacity Manual (HCM) specifies procedures for evaluating intersection performance in terms of a wide variety of prevailing conditions such as traffic composition, intersection geometry, traffic volumes, and signal timing (1). At present, however, performance is measured only in terms of delay per vehicle, a parameter widely accepted as a meaningful and useful indicator of the efficiency with which an intersection serves traffic needs. What is lacking in the current methodology is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. For example, it is well known that the change from permissive to protected left-turn phasing can reduce left-turn accident frequency. 
However, the HCM only permits a quantitative assessment of the impact of this alternative phasing arrangement on vehicle delay. It is left to the engineer or planner to judge the level of safety benefits subjectively and to evaluate the trade-off between the efficiency and safety consequences of the alternative phasing plans. Numerous examples of other geometric design and signal timing improvements could also be given. At present, the principal methods available to the practitioner for evaluating the relative safety of signalized intersections are: a) the application of engineering judgement, b) accident analyses, and c) traffic conflicts analysis. Reliance on engineering judgement has obvious limitations, especially when placed in the context of the elaborate HCM procedures for calculating delay. Accident analyses generally require some type of before-after comparison, either for the case study intersection or for a large set of similar intersections. In either situation, there are problems associated with compensating for regression-to-the-mean phenomena (2), as well as with obtaining an adequate sample size. Research has also pointed to potential bias caused by the way in which exposure to accidents is measured (3, 4). Because of the problems associated with traditional accident analyses, some have promoted the use of the traffic conflicts technique (5). However, this procedure also has shortcomings in that it requires extensive field data collection and trained observers to identify the different types of conflicts occurring in the field. The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections that would be compatible and consistent with that presently found in the HCM for evaluating efficiency-based level of service as measured by delay per vehicle (6). 
The intent was not to develop a new set of accident prediction models, but to design a methodology to quantitatively predict the relative hazard of alternative intersection designs and signal timing plans.


A Study on groundwater and pollutant recharge in urban area: use of hydrochemical data

  • Lee, Ju-Hee;Kwon, Jang-Soon;Yun, Seong-Taek;Chae, Gi-Tak;Park, Seong-Sook
    • Proceedings of the Korean Society of Soil and Groundwater Environment Conference
    • /
    • 2004.09a
    • /
    • pp.119-120
    • /
    • 2004
  • Urban groundwater has a unique hydrologic system because of complex surface and subsurface infrastructures such as the deep foundations of high buildings, subway systems, sewers, and public water supply systems. It has generally been considered that increased surface impermeability reduces groundwater recharge; on the other hand, leaks from sewers and public water supply systems may generate large amounts of recharge. All of these urban facilities may also change groundwater quality through the recharge of a myriad of contaminants. This study was performed to determine the factors controlling the recharge of deep groundwater in an urban area, based on its hydrogeochemical characteristics. The term 'contamination' in this study means any inflow of shallow groundwater, whether clean or contaminated. Urban groundwater samples were collected from a total of 310 preexisting wells deeper than 100 m, selected by random sampling. Major cations together with Si, Al, Fe, Pb, Hg, and Mn were analyzed by ICP-AES, and Cl, NO3, NH4, F, Br, SO4, and PO4 were analyzed by IC. Two groups of groundwater were distinguished on hydrochemical grounds: the first group ranges broadly from Ca-HCO3 type to Ca-Cl+NO3 type; the other group is of Na+K-HCO3 type. The latter group is considered to represent the baseline quality of deep groundwater in the study area. Using the major-ion data for the Na+K-HCO3 type water, we evaluated the extent of groundwater contamination, assuming that if the baseline composition is subtracted from the data acquired for a specific water, the remaining concentrations indicate the degree of contamination. The remainder of each solute for each sample was simply averaged. The results showed that both Ca and HCO3 are typical solutes that are quite enriched in urban groundwater. 
In particular, the pCO2 values calculated using PHREEQC (version 2.8) correlated with the concentrations of major inorganic components (Na, Mg, Ca, NO3, SO4, etc.). The pCO2 values for the first group of waters ranged widely between about 10^-3.0 atm and 10^-1.0 atm and differed from those of the background water samples of Na+K-HCO3 type (<10^-3.5 atm). Considering that the pCO2 of soil water is near 10^-1.5 atm, this indicates that the inflow of shallow water is very significant in the deep groundwaters of the study area. Furthermore, the pCO2 value can be used as an effective parameter to estimate the relative recharge of shallow water and thus the contamination susceptibility. The results of this study suggest that, down to considerable depth, urban groundwater in a crystalline aquifer may be considerably affected by the recharge of shallow water (and pollutants) from adjacent areas. We also suggest that, for such an evaluation, careful examination of systematically collected hydrochemical data is requisite as an effective tool, in addition to hydrologic and hydrogeologic interpretation.
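Where a full speciation code such as PHREEQC is unavailable, a pCO2 of the kind reported above can be approximated from pH and bicarbonate alone via the 25 °C carbonate equilibria. A simplified sketch (ideal solution, hypothetical sample values, standard equilibrium constants):

```python
import math

# Approximate log10 pCO2 from pH and HCO3- molarity, using 25 C constants:
#   CO2(g)  <-> CO2(aq)              log KH = -1.47
#   CO2(aq) + H2O <-> H+ + HCO3-     log K1 = -6.35
# This ignores activity corrections, which PHREEQC would apply.
LOG_KH = -1.47
LOG_K1 = -6.35

def log_pco2(ph, hco3_molar):
    """log10 of the equilibrium CO2 partial pressure (atm), ideal solution."""
    return -ph + math.log10(hco3_molar) - LOG_K1 - LOG_KH

# A hypothetical shallow-water-influenced sample: pH 7.0, HCO3- = 1 mmol/L
print(round(log_pco2(7.0, 1e-3), 2))  # about -2.2, within the reported range
```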


Assessment of Right Ventricular Function in Patients with Chronic Obstructive Pulmonary Disease Using Echocardiographic Tei Index (만성 폐쇄성 폐질환 환자에서 Tei 지수를 이용한 우심실기능 평가)

  • Oh, Yoon-Jung;Shin, Joon-Han;Kim, Deog-Ki;Choi, Young-Hwa;Park, Kwang-Joo;Hwang, Sung-Chul;Lee, Yi-Hyeong
    • Tuberculosis and Respiratory Diseases
    • /
    • v.50 no.3
    • /
    • pp.343-352
    • /
    • 2001
  • Background: Advanced chronic obstructive pulmonary disease is characterized by progressive pulmonary hypertension leading to right heart dysfunction, which plays an important role in clinical evaluation but remains difficult and challenging to quantify. The noninvasive Doppler echocardiographic value referred to as the Tei index has been suggested as a simple, reproducible, and reliable parameter of right ventricular function. The purpose of this study was to assess right ventricular function in patients with chronic obstructive pulmonary disease (COPD) using the Tei index and to evaluate its relationship with pulmonary functional status. Methods: The study population comprised 26 patients with COPD and 10 normal control subjects. The Tei index was obtained by dividing the sum of the isovolumetric contraction and relaxation times by the ejection time, using pulsed-wave Doppler. It was compared with other available Doppler echocardiographic parameters of systolic or diastolic function and with the patients' pulmonary function. Results: The Tei indices of the patients with COPD were significantly higher than those of normal subjects (0.45 ± 0.17 vs. 0.27 ± 0.03, p<0.01). The isovolumetric contraction time/ejection time (0.32 ± 0.08 vs. 0.25 ± 0.05, p<0.05), the isovolumetric relaxation time/ejection time (0.29 ± 0.16 vs. 0.15 ± 0.08, p<0.05), and the preejection period/ejection time (0.46 ± 0.10 vs. 0.38 ± 0.06, p<0.05) were prolonged, and the ejection time (255.2 ± 32.6 vs. 314.2 ± 16.5 msec, p<0.05) was significantly shortened, in patients with COPD compared to normal subjects. The Tei indices were inversely correlated with FEV1 (r=-0.46, p<0.05) and were significantly prolonged in patients with severe obstructive ventilatory dysfunction (less than 35% of predicted FEV1) compared to those with mild or moderate dysfunction. 
The Tei indices were also correlated with the ejection time (r=-0.469), the isovolumetric contraction time/ejection time (r=0.453), the isovolumetric relaxation time/ejection time (r=0.896), and the preejection period/ejection time (r=0.480). Conclusion: The Tei index appears to be a useful noninvasive means of evaluating right ventricular function, and it showed a significant correlation with pulmonary function in patients with COPD.
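The Tei index as defined in the abstract is a one-line computation. The interval values below are illustrative, chosen to land near the reported COPD group mean rather than taken from any patient:

```python
def tei_index(ivct_ms, ivrt_ms, et_ms):
    """Tei (myocardial performance) index: (IVCT + IVRT) / ET.

    ivct_ms : isovolumetric contraction time (msec)
    ivrt_ms : isovolumetric relaxation time (msec)
    et_ms   : ejection time (msec)
    """
    return (ivct_ms + ivrt_ms) / et_ms

# Hypothetical COPD-like intervals: IVCT 40 ms, IVRT 75 ms, ET 255 ms
tei = tei_index(40, 75, 255)  # about 0.45, near the COPD group mean
```

A shorter ejection time in the denominator, as reported for the COPD group, directly drives the index upward.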


The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have shown a tendency to give high returns for investors, generally by making the best use of information technology, and for this reason many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. This paper is therefore constructed on the basis of the following two ideas for classifying which companies are the more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. 
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption about the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and it has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of the DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical theory. Thus far, the method has shown good performance, especially in its generalizing capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, i.e. the maximum separation between classes; the support vectors are the points closest to the maximum-margin hyperplane. If the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space, the original input space being mapped into a high-dimensional dot-product space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the problem of estimating credit ratings. In this study we employed SVM to develop a data mining-based efficiency prediction model. 
We used the Gaussian radial basis function as the kernel function of the SVM. For the multi-class SVM, we adopted the one-against-one binary classification approach and the two all-together methods proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, obtaining the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as the efficiency rating of venture businesses, it is very useful for investors to know the class to within one class of error when it is difficult to determine the accurate class in the actual market. We therefore present accuracy results within 1-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than the binary classification problem, whatever the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance the variable selection process, the parameter selection of the kernel function, the generalization, and the sample size of the multi-class approach.
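The one-against-one scheme the authors adopted reduces a k-class problem to k(k-1)/2 binary classifiers plus majority voting. The sketch below demonstrates only the voting scheme; the hypothetical threshold "classifiers" on a single feature stand in for the paper's RBF-kernel SVMs:

```python
from itertools import combinations

def train_pairwise(X, y):
    """One stub binary classifier per class pair: a midpoint threshold on
    the first feature. (A real one-vs-one SVM trains an RBF-kernel SVM
    on each pair instead.)"""
    models = {}
    for a, b in combinations(sorted(set(y)), 2):
        mean_a = sum(x[0] for x, lab in zip(X, y) if lab == a) / y.count(a)
        mean_b = sum(x[0] for x, lab in zip(X, y) if lab == b) / y.count(b)
        thresh = (mean_a + mean_b) / 2
        low, high = (a, b) if mean_a < mean_b else (b, a)
        models[(a, b)] = (thresh, low, high)
    return models

def predict_ovo(models, x):
    """Majority vote over all pairwise classifiers."""
    votes = {}
    for thresh, low, high in models.values():
        winner = low if x[0] < thresh else high
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)

# Three hypothetical DEA efficiency ratings (0, 1, 2), one feature
X = [[0.2], [0.3], [0.5], [0.6], [0.8], [0.9]]
y = [0, 0, 1, 1, 2, 2]
models = train_pairwise(X, y)
print(predict_ovo(models, [0.55]))  # votes land on class 1
```

The all-together formulations (Weston-Watkins, Crammer-Singer) instead solve a single optimization over all classes at once rather than voting over pairwise models.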

Virus Inactivation during the Manufacture of a Collagen Type I from Bovine Hides (소 가죽 유래 Type I Collagen 생산 공정에서 바이러스 불활화)

  • Bae, Jung Eun;Kim, Chan Kyung;Kim, Sungpo;Yang, Eun Kyung;Kim, In Seop
    • Korean Journal of Microbiology
    • /
    • v.48 no.4
    • /
    • pp.314-318
    • /
    • 2012
  • Most types of collagen used for biomedical applications, such as cell therapy and tissue engineering, are derived from animal tissues. Special precautions must therefore be taken during the production of these proteins to guard against the possibility of the products transmitting infectious diseases to the recipients. The ability to remove and/or inactivate known and potential viral contaminants during the manufacturing process is an increasingly important parameter in assessing the safety of biomedical products. The purpose of this study was to evaluate the efficacy of 70% ethanol treatment and of pepsin treatment at pH 2.0 for the inactivation of bovine viruses during the manufacture of collagen type I from bovine hides. A variety of experimental model bovine viruses, including bovine herpes virus (BHV), bovine viral diarrhea virus (BVDV), bovine parainfluenza 3 virus (BPIV-3), and bovine parvovirus (BPV), were chosen for the evaluation of viral inactivation efficacy. BHV, BVDV, BPIV-3, and BPV were effectively inactivated to undetectable levels within 1 h of the 24-h 70% ethanol treatment, with log reduction factors of ≥5.58, ≥5.32, ≥5.11, and ≥3.42, respectively. They were likewise inactivated to undetectable levels within 5 days of the 14-day pepsin treatment, with log reduction factors of ≥7.08, ≥6.60, ≥5.60, and ≥3.59, respectively. The cumulative virus reduction factors for BHV, BVDV, BPIV-3, and BPV were ≥12.66, ≥11.92, ≥10.71, and ≥7.01. These results indicate that the production process for collagen type I from bovine hides has sufficient virus-reducing capacity to achieve a high margin of virus safety.
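The log reduction factors above follow from before/after virus titers, and cumulative factors are the sum over sequential steps. A minimal sketch with hypothetical titers, not the study's measurements:

```python
import math

def log_reduction(titer_before, titer_after):
    """Log10 reduction factor (LRF) for one inactivation step."""
    return math.log10(titer_before / titer_after)

# Hypothetical titers (infectious units/mL) for two sequential steps
lrf_ethanol = log_reduction(10**6.6, 10**1.3)  # 5.3 logs
lrf_pepsin = log_reduction(10**7.1, 10**0.5)   # 6.6 logs
cumulative = lrf_ethanol + lrf_pepsin           # 11.9 logs overall
```

When the post-treatment titer is below the assay's detection limit, as in the abstract, the detection limit is used as `titer_after`, which is why the reported factors carry a "≥" sign.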

Usefulness of F-18 FDG PET/CT in Adrenal Incidentaloma: Differential Diagnosis of Adrenal Metastasis in Oncologic Patients (부신 우연종에서 F-18 FDG PET/CT의 유용성: 악성 종양 환자에서 부신 전이의 감별진단)

  • Lee, Hong-Je;Song, Bong-Il;Kang, Sung-Min;Jeong, Shin-Young;Seo, Ji-Hyoung;Lee, Sang-Woo;Yoo, Jeong-Soo;Ahn, Byeong-Cheol;Lee, Jae-Tae
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.421-428
    • /
    • 2009
  • Purpose: We evaluated the characteristics of adrenal masses incidentally observed on nonenhanced F-18 FDG PET/CT in oncologic patients and the ability of F-18 FDG PET/CT to differentiate malignant from benign adrenal masses. Materials and Methods: Between March 2005 and August 2008, 75 oncologic patients (46 men, 29 women; mean age 60.8 ± 10.2 years; range 35-87 years) with 89 adrenal masses incidentally found on PET/CT were enrolled in this study. For quantitative analysis, the size (cm), Hounsfield unit (HU), maximum standardized uptake value (SUVmax), and SUVratio of all 89 adrenal masses were measured. The SUVratio was defined as the SUVmax of the adrenal mass divided by SUVliver, the SUVmax of liver segment 8. The final diagnosis of the adrenal masses was based on pathologic confirmation, radiologic evaluation (HU<0: benign), and clinical decision. Results: Size, HU, SUVmax, and SUVratio all differed significantly between benign and malignant adrenal masses (p<0.05), and SUVratio was the most accurate parameter. A cut-off value of 1.0 for SUVratio provided 90.9% sensitivity and 75.6% specificity. In small adrenal masses (1.5 cm or less), only SUVratio differed significantly between benign and malignant masses; a cut-off value of 1.0 again provided 80.0% sensitivity and 86.4% specificity. Conclusion: With quantitative analysis, F-18 FDG PET/CT can offer more accurate information than nonenhanced CT for differentiating malignant from benign adrenal masses incidentally observed in oncologic patients.
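The SUVratio criterion above is a simple ratio-and-threshold rule. A sketch with hypothetical lesion values (not patient data):

```python
def suv_ratio(suvmax_adrenal, suvmax_liver):
    """SUVratio: adrenal-mass SUVmax normalized to liver (segment 8) SUVmax."""
    return suvmax_adrenal / suvmax_liver

def classify(ratio, cutoff=1.0):
    """Flag 'malignant' when the mass is more FDG-avid than liver,
    per the 1.0 cut-off reported in the abstract."""
    return "malignant" if ratio > cutoff else "benign"

# Hypothetical lesion: adrenal SUVmax 4.2 against liver SUVmax 2.8
print(classify(suv_ratio(4.2, 2.8)))  # malignant
```

Normalizing to liver uptake compensates for inter-patient variation in background FDG activity, which is why the ratio outperformed raw SUVmax in small masses.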

Change of Particle Size of Magnesium Silicate According to Reaction Conditions and Evaluation of Its Polyol Purification Ability (반응 조건에 따른 규산마그네슘의 입도 변화 및 폴리올 정제 능력평가)

  • Yoo, Jhongryul;Jeong, Hongin;Kang, Donggyun;Park, Sungho
    • Korean Chemical Engineering Research
    • /
    • v.58 no.1
    • /
    • pp.84-91
    • /
    • 2020
  • The efficiency of the synthetic magnesium silicate used in purifying basic polyols and edible oil is judged by its purification ability and filtration rate, both of which are affected by the particle size and surface area of the magnesium silicate. In this study, we investigated how the particle size of magnesium silicate changes with the reaction temperature, the injection rate, the injection order (Si or Mg first), and the Mg/Si reaction mole ratio. The synthesized magnesium silicate was compared and analyzed across the synthesis, grinding, and refining processes. In the synthesis process, the reaction temperature and feed rate did not affect the average particle size of the magnesium silicate, while the Mg/Si reaction molar ratio and the injection order were the main factors. When the Mg molar ratio increased from 0.125 to 0.500, the average particle size increased by 8.7 μm, from 54.4 μm to 63.1 μm, with Mg injection, and by about 4.8 μm, from 47.3 μm to 52.1 μm, with Si injection. The average particle size was 59.1 μm for Mg injection and 48.4 μm for Si injection, a difference of 10.7 μm, and the filtration rate was accordingly about 2 times faster under Mg injection. That is, as the particle size increases, the filtration time shortens, and the washing filtration rate can be increased to improve the productivity of the magnesium silicate. The cake of separated magnesium silicate left after filtration is turned into a solid by drying and is used as a powdery adsorbent after grinding. As the physical strength of the dried magnesium silicate increased, the average particle size of the powder increased, and this strength was confirmed to depend on the reaction molar ratio. 
As the Mg/Si reaction molar ratio increased, the physical strength of the magnesium silicate decreased, and the average particle size after grinding decreased by about 40% compared with that after synthesis. This loss of strength improved the refining ability, owing to the smaller average particle size and the larger amount of fine particles after pulverization, but it reduced the purification filtration rate. As the Mg/Si molar ratio was increased from 0.125 to 0.5 with Mg injection, the refining ability increased about 1.3 times, but the purification filtration rate decreased about 1.5 times. Therefore, to improve the productivity of magnesium silicate the Mg/Si reaction molar ratio should be increased, but to increase the purification filtration rate of the polyol it should be decreased. Among the synthesis parameters of magnesium silicate, the injection order and the Mg/Si reaction molar ratio are the important factors: they govern the average particle size after synthesis and, through the change in compressive strength, the particle size after grinding, and thus determine both productivity and refining capacity.

An Analysis of Prognostic Factors in the Uterine Cervical Cancer Patients (자궁경부암 환자의 예후인자에 관한 분석)

  • Yang, Dae-Sik;Yoon, Won-Sub;Kim, Tae-Hyun;Kim, Chul-Yong;Choi, Myung-Sun
    • Radiation Oncology Journal
    • /
    • v.18 no.4
    • /
    • pp.300-308
    • /
    • 2000
  • Purpose: The aim of this study was to analyze the survival and recurrence rates of uterine cervical carcinoma patients who received radiation therapy, and to study prognostic factors such as the Papanicolaou (Pap) smear, carcinoembryonic antigen (CEA), and squamous cell carcinoma (SCC) antigen. Methods and Materials: From January 1981 to December 1998, 827 uterine cervical cancer patients were treated with radiation therapy. The patients were divided into two groups: radiation therapy only (521 patients) and postoperative radiation therapy (326 patients). Age, treatment modality, clinical stage, histopathology, recurrence, and follow-up Pap smears, CEA, and SCC antigen were used as parameters for the evaluation. Survival and recurrence rates were estimated with the Kaplan-Meier method, and prognostic factors were analyzed with the Cox hazards model. Median follow-up was 38.6 months. Results: In the radiation therapy only group, 314 patients (60%) achieved complete response (CR), 47 patients (9%) showed local recurrence (LR), and 78 patients (15%) developed distant metastasis (DM). In the postoperative radiation therapy group, 276 patients (85%) achieved CR, 8 patients (2%) showed LR, and 37 patients (11%) developed DM. Five-year survival and recurrence rates were evaluated for all parameters. The statistically significant factors for the survival rate in univariate analysis were clinical stage (p=0.0001), treatment modality (p=0.0010), recurrence (p=0.0001), Pap smear (p=0.0329), CEA (p=0.0001), and SCC antigen (p=0.0001). Conclusion: This study indicates that follow-up Pap smears, CEA, and SCC antigen after treatment are significant parameters and predictive factors for the survival and recurrence of uterine cervical carcinoma.
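The Kaplan-Meier survival estimate used in studies like the one above can be sketched in a few lines of plain Python. The follow-up data below are hypothetical, not the study's:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates at each distinct event time.

    times  : follow-up durations (e.g. months)
    events : 1 = event (death/recurrence) observed, 0 = censored
    Returns a list of (time, S(t)) pairs at event times.
    """
    data = sorted(zip(times, events))
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        n = sum(1 for tt, e in data if tt >= t)             # at risk at t
        if d > 0:
            s *= (1 - d / n)  # multiply in the conditional survival at t
            curve.append((t, s))
        while i < len(data) and data[i][0] == t:  # skip ties at t
            i += 1
    return curve

# Six hypothetical follow-ups (months, event indicator; 0 = censored)
times = [6, 12, 12, 20, 30, 38]
events = [1, 1, 0, 1, 0, 0]
print(kaplan_meier(times, events))
```

Censored patients leave the risk set without triggering a drop in the curve, which is what distinguishes this estimator from a naive fraction-surviving calculation; the Cox model the authors used then relates covariates (stage, CEA, SCC antigen) to the hazard underlying such curves.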
