• Title/Summary/Keyword: conservative conditions

388 search results

Maximum Value Calculation of High Dose Radioiodine Therapy Room (고용량 방사성옥소 치료 병실의 최대치 산출)

  • Lee, Kyung-Jae;Cho, Hyun-Duck;Ko, Kil-Man;Park, Young-Jae;Lee, In-Won
    • The Korean Journal of Nuclear Medicine Technology, v.14 no.1, pp.28-34, 2010
  • Purpose: With the recent increase in thyroid cancer, the number of patients awaiting high-dose radioiodine therapy has grown. Taking the acceptance capacity of the current facility into consideration, this study calculates the maximum number of high-dose radioiodine therapy patients that can be treated. Materials and Methods: The amount and radioactivity of waste water discharged from high-dose radioiodine therapy patients admitted to the present hospital, as well as the radioactivity concentration of the air released into the atmosphere from the high-dose radioiodine therapy ward, were measured. The maximum number of treatable patients was calculated such that the radioactivity of the waste water and its concentration in the released air satisfy the standards set by the Ministry of Education, Science and Technology (discharge into the water supply: 30 Bq/L; release into air: 3 Bq/m³). Results: Calculated conservatively, the average radioactivity concentration of waste water from treating one patient was 8 MBq/L, which fell to 29.5 Bq/L after 117 days of diminution in the water-purifier tank. Likewise, the average concentration from treating two patients was 16 MBq/L, which fell to 29.7 Bq/L after 70 days of diminution in the tank. Under the same conditions, the radioactivity concentration released into the air through the RI ventilation filter from the radioiodine therapy ward was 0.38 Bq/m³. Conclusion: The maximum number of high-dose radioiodine therapy patients treatable within the acceptance capacity was calculated and applied to the current facility; if double rooms are operated by improving the ward structure, the accumulated treatment waiting period for radioiodine therapy patients could be reduced.

  • PDF
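The decay arithmetic behind these results can be sketched as follows. This is a minimal illustration assuming pure physical decay of I-131 (half-life 8.02 days) and no dilution; the paper's shorter 117-day figure presumably also reflects dilution and mixing in the purifier tank, so pure decay gives the more conservative bound.

```python
import math

I131_HALF_LIFE_DAYS = 8.02  # physical half-life of iodine-131

def days_to_decay(initial_bq_per_l, limit_bq_per_l, half_life=I131_HALF_LIFE_DAYS):
    """Days of pure radioactive decay needed for a concentration to
    fall from an initial value to a discharge limit."""
    return half_life * math.log2(initial_bq_per_l / limit_bq_per_l)

# One patient: 8 MBq/L of waste water against the 30 Bq/L discharge standard
print(round(days_to_decay(8e6, 30), 1))  # 144.6 days by decay alone
```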

AN EXPERIMENTAL STUDY ON THE EFFECT OF Ca(OH)2 UPON THE HEALING PROCESS OF THE PULP AND PERIAPICAL TISSUE IN THE DOGS' TEETH (수산화칼슘이 손상치수조직 및 치근조직의 치유에 미치는 영향에 관한 연구)

  • Lim, S.S.;Yoon, S.H.;Lee, C.S.;Lee, M.J.;Kim, Y.H.;Kwon, H.C.;Um, C.M.
    • Restorative Dentistry and Endodontics, v.8 no.1, pp.123-131, 1982
  • The purpose of this study was to observe the responses of the remaining pulp tissue to several Ca(OH)₂ products after pulpotomy, and the responses of periapical tissue to some root canal filling materials after extirpation. For pulpotomy, Class V cavities were prepared on the premolars, molars, and upper canines, and the pulp was amputated. Each drug was placed over the amputated tissue and the cavity was sealed with zinc oxide eugenol cement. The drugs used in the study were Dycal (Caulk Co., U.S.A.), Cavitec (Kerr Co., U.S.A.), Calvital, Nobudyne, and Neodyne (Neo Dental Chemical Products). For extirpation, endodontic cavities were prepared on the lingual surfaces of the anterior teeth, and the pulp tissues were extirpated by the routine method. After enlarging, irrigation, and measurement of root length by X-ray, each root canal filling material was placed in the canal with a gutta percha cone, and the endodontic cavity was sealed with zinc oxide eugenol cement. Zinc oxide eugenol (ZOE), Ca(OH)₂ (Eli Lilly Co., U.S.A.), and Vitapex (Neo Dental Chemical Products) were used as root canal filling materials. Animals were sacrificed 1, 3, and 6 weeks after the operation. The teeth were decalcified in formic acid, sectioned, and stained with hematoxylin eosin. Microscopic examination revealed the following. 1. Dycal: Dentin bridge formation was observed at the 3rd week after pulpotomy. Inflammatory conditions, namely infiltration of inflammatory cells and dilatation of blood vessels, persisted in the remaining pulp tissue at the 6th week. 2. Calvital: A dentin bridge was observed at the 1st week after pulpotomy. As time elapsed, the pulp tended toward fibrous degeneration. 3. Cavitec, Nobudyne, and Neodyne: With Cavitec and Nobudyne, an incomplete and irregular dentin bridge was observed at the 6th week; with Neodyne, at the 3rd week. Severe inflammatory changes were seen in the remaining pulp tissue. As time elapsed, fibrous degeneration tended to spread in the remaining pulp tissue. 4. Ca(OH)₂: When Ca(OH)₂ was used as a canal filling material, osteocementum formed at the 3rd week, the matrix of cementum and dentin was resorbed, and infiltration of lymphocytes was seen in the periapical tissue. 5. ZOE and Vitapex: When ZOE and Vitapex were used as root canal filling materials, a cementum-like substance was seen in the periapical portion at the 1st week. As time elapsed, the matrix of cementum and dentin tended to be resorbed. At the 6th week, the inflammatory condition of the periapical tissue continued with ZOE but was reduced with Vitapex.

  • PDF

The Achievements and Limitations of the U.S. Welfare Reform (미국 복지개혁의 성과와 한계)

  • Kim, Hwan-Joon
    • Korean Journal of Social Welfare, v.53, pp.129-153, 2003
  • This study examines the socio-economic impacts of recent welfare reform in the United States. Based on the neo-conservative critique of the traditional public assistance system for low-income families, the 1996 welfare reform placed greater emphasis on reducing welfare dependency and increasing work effort and self-sufficiency among welfare recipients. In particular, the welfare reform legislation instituted 60-month lifetime limits on cash assistance, expanded mandatory work requirements, and imposed financial penalties for noncompliance. With the well-timed economic boom of the second half of the 1990s, the welfare reform appears to have achieved considerable progress: the welfare caseload declined sharply to less than 50% of its 1994 peak, single mothers' labor force participation increased substantially, and child poverty decreased. Despite these good signals, the welfare reform also has several potential problems. Many welfare leavers participate in the labor market, but not all (or even most) of them. The economic well-being of working welfare leavers did not increase significantly, because earnings gains were canceled out by parallel decreases in welfare benefits. Furthermore, most working welfare leavers are employed in jobs with poor employment stability and low wages, making them highly vulnerable to frequent layoffs, long-term joblessness, persistent poverty, and welfare recidivism. Another serious problem of the welfare reform is that a substantial number of welfare recipients face extreme difficulties in finding jobs because they have severe barriers to employment. The new welfare system with its 5-year time limit can severely threaten the livelihoods of these people. The welfare reform presupposes that welfare recipients can achieve self-reliance by increasing their labor market activity. However, empirical evidence suggests that many people are unable to respond to the new, work-oriented welfare strategy. 
It may be very difficult to achieve both objectives of the welfare reform, (1) providing adequate income security for low-income families and (2) promoting self-sufficiency, at the same time, because they sometimes conflict with each other. With this in mind, a possible solution is to divide welfare recipients into a "(Very)-Hard-to-Employ" group and a "(Relatively)-Ready-to-Work" group, based on careful examination of a wide range of personal conditions. For the former group, the primary objective of welfare policy should be the first objective (providing income security). For the "Ready-to-Work" group, follow-up services to promote job retention and advancement, as well as skill training and job-search services, are very important. The U.S. experience of welfare reform provides useful implications for newly developing Korean public assistance policies for the able-bodied low-income population.

  • PDF

Operator exposure risk assessment of benzimidazole fungicides on Korean agricultural condition (Benzimidazole계 살균제의 농작업자 위해성평가)

  • Lee, Je-Bong;Shin, Jin-Sup;Jeong, Mi-Hye;Park, Yeon-Ki;Im, Geon-Jae;Kang, Kyu-Young
    • The Korean Journal of Pesticide Science, v.9 no.4, pp.347-353, 2005
  • Pesticide risk assessment for operators as well as consumers has become one of the regulatory tools for reducing unreasonable adverse health effects from pesticide use. The risk to pesticide operators can be quantified by comparing the acceptable operator exposure level (AOEL) with the exposure level during pesticide application. This study evaluates the risk to workers applying benzimidazole fungicides. Operator exposure levels were calculated using a Japanese operator exposure study conducted with EPN 45% EC. The AOELs were obtained by dividing the lowest relevant no observed adverse effect levels (NOAELs) for the exposure scenario by an uncertainty factor of 100. For the non-cancer and cancer occupational risk assessments, the Q₁* values produced by the US EPA and the lifetime average daily dose (LADD), calculated from the average daily dose (ADD), treatment days per year, and years worked over a lifetime, were used. Operator exposures for benzimidazole fungicide application were 0.2 (benomyl), 0.36 (carbendazim), and 0.42 (thiophanate-methyl) mg/kg/day. Short-term AOELs for benomyl, carbendazim, and thiophanate-methyl were 0.3, 0.1, and 0.2 mg/kg/day, and long-term AOELs were 0.025, 0.025, and 0.08 mg/kg/day, respectively. LADDs were 0.0038 (benomyl), 0.0067 (carbendazim), and 0.0081 (thiophanate-methyl) mg/kg/day. The ratios of exposure to AOEL were 0.28-1.5 for short-term and 3.73-9.88 for long-term exposure. Cancer risks for operators under the standard application scenario were 9.12×10⁻⁶ for benomyl, 1.61×10⁻⁵ for carbendazim, and 1.13×10⁻⁴ for thiophanate-methyl. The results showed that the three fungicides exceed the risk criterion of 1.0×10⁻⁶. These risk assessments were based on conservative assumptions and are therefore believed to be protective of the applicator. To refine the risk under more realistic conditions, further risk assessment with more realistic data would be needed.
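The two quantities at the core of this assessment, the exposure-to-AOEL ratio and the Q₁* × LADD cancer risk, reduce to simple arithmetic. In the sketch below the Q₁* value is hypothetical, back-calculated from the abstract's benomyl figures, since the paper does not list the Q₁* values themselves.

```python
def hazard_quotient(exposure, aoel):
    """Ratio of operator exposure to the acceptable operator exposure
    level (AOEL); values above 1 flag a potential concern."""
    return exposure / aoel

def cancer_risk(q1_star, ladd):
    """Lifetime cancer risk under the linear low-dose model: Q1* x LADD."""
    return q1_star * ladd

# Short-term check for benomyl: exposure 0.2 vs AOEL 0.3 mg/kg/day
print(round(hazard_quotient(0.2, 0.3), 2))  # 0.67, below the concern level of 1

# Hypothetical Q1* of 2.4e-3 (mg/kg/day)^-1 with benomyl's LADD of 0.0038
print(cancer_risk(2.4e-3, 0.0038))  # about 9.1e-06, above the 1e-6 criterion
```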

Life-time Prediction of a FKM O-ring using Intermittent Compression Stress Relaxation (CSR) and Time-temperature Superposition (TTS) Principle (간헐 압축응력 완화와 시간-온도 중첩 원리를 이용한 FKM 오링의 수명 예측 연구)

  • Lee, Jin-Hyok;Bae, Jong-Woo;Kim, Jung-Su;Hwang, Tae-Jun;Park, Sung-Doo;Park, Sung-Han;Min, Yeo-Tae;Kim, Won-Ho;Jo, Nam-Ju
    • Elastomers and Composites, v.45 no.4, pp.263-271, 2010
  • Intermittent CSR testing was used to investigate the degradation of an FKM O-ring and to predict its life-time. An intermittent CSR jig was designed with the O-ring's in-service environment taken into consideration. The test allowed observation of the effects of friction, heat loss, and stress relaxation due to the Mullins effect. Degradation of O-rings by thermal aging was observed between 60 and 160°C. In the high-temperature range (100-160°C), O-rings showed linear degradation behavior and satisfied the Arrhenius relationship; the activation energy was about 60.2 kJ/mol. From the Arrhenius plots, the predicted life-times were 43.3 years and 69.9 years for the 50% and 40% failure conditions, respectively. Based on the TTS (time-temperature superposition) principle, degradation could be observed at 60°C while saving testing time. Between 60 and 100°C the activation energy decreased to 48.3 kJ/mol. A WLF (Williams-Landel-Ferry) plot confirmed that O-rings show non-linear degradation behavior below 80°C. The life-times predicted by the TTS principle were 19.1 years and 25.2 years for the two failure conditions. The life-time predicted by the TTS principle is more conservative than that from the Arrhenius relationship.
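The Arrhenius extrapolation used above can be sketched as follows; the 30-day test time, the temperatures, and the failure criterion in the example are illustrative assumptions, not data from the paper.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_extrapolate(t_fail_test, temp_test_c, temp_service_c, ea_j_per_mol):
    """Scale a failure time measured at a test temperature to a service
    temperature using the Arrhenius relation t ~ exp(Ea / (R*T))."""
    t_test = temp_test_c + 273.15
    t_serv = temp_service_c + 273.15
    return t_fail_test * math.exp(ea_j_per_mol / R * (1.0 / t_serv - 1.0 / t_test))

# Hypothetical: 30 days to a failure criterion at 160 C, Ea = 60.2 kJ/mol,
# extrapolated down to a 25 C service temperature
days = arrhenius_extrapolate(30, 160, 25, 60.2e3)
print(round(days / 365.25, 1))  # predicted service life in years
```

Lowering the service temperature stretches the predicted life exponentially, which is why accelerated aging at 100-160°C can stand in for decades at room temperature.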

An Evaluation of Allowable Bearing Capacity of Weathered Rock by Large-Scale Plate-Bearing Test and Numerical Analysis (대형평판재하시험 및 수치해석에 의한 풍화암 허용지지력 평가)

  • Hong, Seung-Hyeun
    • Journal of the Korean Geotechnical Society, v.38 no.10, pp.61-74, 2022
  • Given that structure foundations are increasingly located on weathered rock, the allowable bearing capacities reported for such foundations in geotechnical investigation reports were reviewed to establish adequate design values. The reported allowable bearing capacity of a foundation on weathered rock was approximately 400-700 kN/m², with large variation, and was considered conservative. Because the allowable bearing capacity of the foundation ground is an important index in determining the foundation type in the early design stage, the initial decision can significantly influence construction cost and schedule. In this study, therefore, six large-scale plate-bearing tests were conducted on weathered rock, and the bearing capacity and settlement characteristics were analyzed. The bearing capacities in all six tests exceeded 1,500 kN/m², and compared against various bearing capacity formulas the results were closest to those of the bearing capacity formula based on pressuremeter tests. In addition, the elastic modulus back-calculated from the load-settlement behavior of the large-scale plate-bearing tests agreed with the elastic modulus from the pressuremeter tests. Considering the large-scale plate-bearing tests in this study together with other plate-bearing tests on weathered rock in Korea, the allowable bearing capacity of weathered rock is evaluated to be over 1,000 kN/m². However, because foundation settlement increases with foundation size, the allowable bearing capacity should be limited by the allowable settlement criteria of the upper structure. 
Therefore, in this study, the anticipated foundation settlements for various foundation sizes and weathered rock thicknesses were evaluated by numerical analysis, and the foundation sizes and ground conditions with an allowable bearing capacity of over 1,000 kN/m² are presented in a table. These findings should be useful in determining the foundation type in early foundation design.
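The caveat that allowable bearing capacity must be restrained by settlement criteria follows from elastic theory: at a constant bearing pressure, immediate settlement grows linearly with footing width B, as in s = qB(1-ν²)I/E. A minimal sketch with an assumed modulus, Poisson's ratio, and influence factor (all illustrative values, not the study's data):

```python
def elastic_settlement_mm(q_kpa, width_m, e_mpa, poisson=0.3, influence=0.88):
    """Immediate (elastic) settlement of a square footing on an elastic
    half-space: s = q * B * (1 - v^2) * I / E, returned in millimetres."""
    e_kpa = e_mpa * 1000.0
    return q_kpa * width_m * (1 - poisson**2) * influence / e_kpa * 1000.0

# Same 1,000 kN/m2 pressure, growing footing width: settlement scales with B,
# so a pressure that is fine for a test plate may over-settle a real footing.
for b in (1.0, 2.0, 4.0):
    print(b, round(elastic_settlement_mm(1000, b, e_mpa=200), 1))  # 4.0, 8.0, 16.0 mm
```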

Ecological Characteristics of Benthic Macroinvertebrates according to Stream Order and Habitat - Focused on the Ecological Landscape Conservation Area - (하천 규모와 서식지에 따른 저서성 대형무척추동물의 생태특성 - 생태·경관보전 지역을 중심으로 -)

  • Hwang, In Chul;Kwon, Soon Jik;Park, Young Jun;Park, Jin Young
    • Journal of Wetlands Research, v.24 no.3, pp.185-195, 2022
  • This study surveyed benthic macroinvertebrates in spring and autumn from 2014 to 2020 to characterize the ecology of streams of different sizes and habitats, centering on ecological landscape conservation areas; a total of 256 species in 105 families, 25 orders, 8 classes, and 5 phyla appeared. By region, Ephemeroptera and Trichoptera appeared at high rates in regions consisting of lotic areas, while Coleoptera and Odonata appeared at high rates in regions consisting of lentic areas. Comparing the populations of the Ephemeroptera-Plecoptera-Trichoptera (EPT) groups by region, the regions were classified into three groups (upstream, mainstream, and lentic areas), and the EPT population ratio was confirmed to change moving from upstream to downstream. As stream order increased, the numbers of species and individuals increased. The shredder group (SH) tended to decrease as stream size increased (r=0.9925), and the collector-filterer group (CF) tended to increase as stream size increased (r=0.9319). Scrapers (SC) were found to replace one another among species of the same ecological status moving downstream from upstream, and SC did not differ significantly by stream order. To maintain a healthy ecosystem when designating and managing ecological landscape conservation areas, ecological factors such as competition and physico-chemical factors such as water quality and substrate conditions must be considered. Therefore, if the competent authority designates survey areas including buffer zones that encompass streams and physical habitats of various sizes, it will benefit the conservation area and help secure more biological resources.
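The r values quoted for the shredder and collector-filterer trends are ordinary Pearson correlations against stream order; a minimal sketch with hypothetical shredder percentages (not the paper's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical shredder share falling steadily with stream order 1..5
orders = [1, 2, 3, 4, 5]
shredder_pct = [38, 30, 21, 15, 8]
print(round(pearson_r(orders, shredder_pct), 4))  # strongly negative, near -1
```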

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.95-108, 2017
  • Recently AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought machines could not beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems, and it performs especially well in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where traditional machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques, so far studied mostly for high-dimensional data recognition, can also be used for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. 
To evaluate deep learning algorithms on this binary classification problem, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. Because not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, since it shows how well a model classifies the class of interest rather than overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm recognizes features by reading values adjacent to a given value, but business data fields are usually independent, so field adjacency carries little meaning; in this experiment we therefore set the CNN filter size to the number of fields, so the model learns the characteristics of the whole record at once, and added a hidden layer for decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For dropout, neurons in each hidden layer were dropped with probability 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. 
This experiment yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models, which is interesting because CNNs performed well not only in the fields where their effectiveness is proven but also in binary classification problems to which they have rarely been applied. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to its performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
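The F1 score used to rank the models is the harmonic mean of precision and recall, computed from the confusion matrix; a minimal sketch with illustrative counts (not the paper's results):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall. On imbalanced data
    (like telemarketing responses) it rewards finding the positive class,
    unlike accuracy, which a trivial all-negative model can inflate."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative confusion-matrix counts: 50 true positives, 30 false positives,
# 50 false negatives -> precision 0.625, recall 0.5
print(round(f1_score(50, 30, 50), 3))  # 0.556
```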