• Title/Summary/Keyword: Application Criteria (적용기준)


The Content and Risk Assessment of Heavy Metals in Herbal Pills (유통 환제의 유해 중금속 함량 및 위해도 평가)

  • Lee, Sung-Deuk;Lee, Young-Ki;Kim, Moo-Sang;Park, Seok-Ki;Kim, Yeon-Sun;Chae, Young-Zoo
    • Journal of Food Hygiene and Safety
    • /
    • v.27 no.4
    • /
    • pp.375-387
    • /
    • 2012
  • The objective of this study was to investigate contamination levels and assess the health risks of heavy metals in herbal pills. In total, 31 items and 93 samples were obtained from major herbal-medicine producing areas, herbal markets, and online stores from January to June 2010. Pb, Cd, and As were quantified by inductively coupled plasma mass spectrometry (ICP-MS), and Hg by a direct mercury analyzer without sample digestion. The average contents in the samples were 0.87 mg/kg for Pb, 0.08 mg/kg for Cd, 2.87 mg/kg for As, and 0.16 mg/kg for Hg. By plant part (cortex, fructus, herba, radix, seed, algae, and others), the average heavy-metal contents were 0.63, 3.94, 1.42, 1.05, 0.16, 22.31, and 10.17 mg/kg, respectively. After estimating dietary exposure, the acceptable daily intake (ADI), average daily dose (ADD), provisional tolerable weekly intake (PTWI), and relative hazard of each heavy metal were evaluated. The relative hazards compared with the PTWI were below the standards recommended by JECFA: Pb 3.1%, Cd 0.9%, and Hg 0.5%. Cancer risks calculated with slope factors (SF) from the Ministry of Environment of the Republic of Korea and the US Environmental Protection Agency were 4.24 × 10^-7 for Pb and 3.38 × 10^-4 for As (assuming that the total arsenic content equals inorganic arsenic). Possible Pb-induced cancer risks in herbal pills by plant part (cortex, fructus, herba, radix, seed, algae, and others) were 1.95 × 10^-7, 1.45 × 10^-6, 2.14 × 10^-7, 6.27 × 10^-7, 1.99 × 10^-8, 3.61 × 10^-7, and 9.64 × 10^-8, respectively; possible As-induced cancer risks for the same parts were 1.54 × 10^-5, 7.24 × 10^-5, 1.23 × 10^-4, 2.02 × 10^-5, 3.25 × 10^-6, 2.18 × 10^-3, and 5.67 × 10^-6, respectively. Taken together, these results indicate that most samples, apart from a few with relatively high heavy-metal contents, were safe.
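The two risk measures this abstract reports (relative hazard against the JECFA PTWI, and cancer risk as average daily dose times slope factor) can be sketched as below. The weekly intake figure is an illustrative assumption chosen to reproduce the reported 3.1% for Pb, and the PTWI value of 25 µg/kg bw/week is taken to be the JECFA figure for Pb; neither is the study's raw data.

```python
# Sketch of the risk indices used in the abstract: relative hazard versus the
# JECFA PTWI, and lifetime cancer risk via a slope factor (SF).
# All numeric inputs below are illustrative assumptions.

def relative_hazard_pct(weekly_intake_ug_kg_bw, ptwi_ug_kg_bw):
    """Weekly intake as a percentage of the provisional tolerable weekly intake."""
    return 100.0 * weekly_intake_ug_kg_bw / ptwi_ug_kg_bw

def cancer_risk(average_daily_dose_mg_kg_day, slope_factor_per_mg_kg_day):
    """Lifetime excess cancer risk = ADD x SF (linear low-dose model)."""
    return average_daily_dose_mg_kg_day * slope_factor_per_mg_kg_day

# Example: Pb with an assumed PTWI of 25 ug/kg bw/week and an assumed weekly
# exposure of 0.775 ug/kg bw -> ~3.1%, matching the abstract's Pb figure.
pb_hazard = relative_hazard_pct(0.775, 25.0)
print(round(pb_hazard, 1))  # 3.1
```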

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising (SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.167-194
    • /
    • 2019
  • This research starts from four basic issues confronted when making decisions in keyword bidding: incentive incompatibility, limited information, myopia, and the choice of decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, an alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, from the sponsor's perspective, a statistical optimization model for constructing a Sponsored Search Advertising (SSA) portfolio, validated through empirical tests, that can be used in portfolio decision making. Previous research formulates CTR estimation models using CPC, Rank, Impression, CVR, etc., individually or collectively as independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can serve as decision variables in the bidding system. The classical SSA model is designed on the assumption that CPC is the decision variable and CTR the response variable, but it faces serious hurdles in estimating CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR estimates and creates practical management problems. Sponsors make bidding decisions under limited information, so a strategic portfolio approach based on statistical models is necessary. To resolve the problem of the classical SSA model, a new SSA model is designed on the assumption that Rank is the decision variable. Rank has been proposed in many papers as the best decision variable for predicting CTR, and most search-engine platforms provide options and algorithms that make bidding by Rank possible, so sponsors can participate in keyword bidding with Rank. This paper therefore tests the validity of the new SSA model and its applicability to constructing an optimal keyword portfolio. The research process is as follows: to perform the optimization analysis under the new SSA model, the study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationships, screens scenarios for CTR and CPC estimation, selects the best-fit model through goodness-of-fit (GOF) tests, formulates the optimization models, confirms spillover effects, and suggests a modified optimization model reflecting spillover together with some strategic recommendations. Optimization models using these CTR/CPC estimation models are tested empirically with two objective functions: (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both tests show significant improvements, confirming that the suggested SSA model is valid for constructing a keyword portfolio with the CTR/CPC estimation models proposed in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the portfolio because of the myopia of their low immediate profit. To solve this problem, a Markov chain analysis is carried out and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. A revised CVR optimization model is then proposed, tested, and shown to be valid for portfolio construction. Strategic guidelines and insights are as follows: brand keywords dominate in almost every respect, including CTR, CVR, and expected profit. However, generic keywords turn out to be the CTKs, with spillover potential that can increase consumer awareness and lead consumers to brand keywords; this is why generic keywords should be the focus of keyword bidding. The contributions of this study are to propose a novel SSA model with Rank as the decision variable, to manage the keyword portfolio by categories according to keyword characteristics, to provide statistical modeling and management based on Rank in portfolio construction, and, through empirical tests, to propose new strategic guidelines focusing on the CTK and a modified CVR optimization objective function that reflects the spillover effect instead of the previous expected-profit models.
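As a rough illustration of the Rank-as-decision-variable idea in this abstract, the sketch below assumes already-fitted CTR(Rank) and CPC(Rank) curves (the paper estimates these statistically; the curves, keyword names, impression counts, and budget here are all hypothetical) and exhaustively searches rank assignments that maximize total expected clicks under a budget constraint.

```python
from itertools import product

# Minimal sketch of a Rank-based keyword-portfolio optimization: choose a rank
# per keyword to maximize total expected clicks subject to a budget. The
# CTR/CPC curves and all figures below are illustrative assumptions.

def ctr(rank):          # assumed decreasing, non-linear in rank
    return 0.10 / rank

def cpc(rank):          # assumed cost falls at worse (higher-numbered) ranks
    return 1000.0 / rank

keywords = ["brand", "generic_a", "generic_b"]       # hypothetical keywords
impressions = {"brand": 5000, "generic_a": 3000, "generic_b": 2000}
budget = 800_000.0
ranks = [1, 2, 3, 4, 5]

best = None
for assignment in product(ranks, repeat=len(keywords)):
    clicks = cost = 0.0
    for kw, r in zip(keywords, assignment):
        kw_clicks = impressions[kw] * ctr(r)
        clicks += kw_clicks
        cost += kw_clicks * cpc(r)                   # spend = clicks x CPC
    if cost <= budget and (best is None or clicks > best[1]):
        best = (assignment, clicks)

print(best)  # best rank per keyword and the expected clicks
```

With these toy curves the budget forces the middle keyword down to rank 2 while the others stay at rank 1, which mirrors the portfolio trade-off the paper formalizes.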

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. AI technologies have already shown abilities equal or superior to humans in many fields, including image and speech recognition. Because AI can be applied across the medical, financial, manufacturing, service, and education fields, many efforts have been made to identify current technology trends and analyze directions of development. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been released to the public as open source projects, and the technologies and services that use them have increased rapidly; this is regarded as one of the major reasons for the fast development of AI. The spread of the technology also owes much to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. This study therefore aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, which are developed through the online collaboration of many parties. A list of major AI-related projects created from 2000 to July 2018 on Github was searched and collected, and the development trends of major technologies were examined in detail by applying text-mining techniques to the topic information that characterizes the collected projects and their technical fields. The analysis showed that fewer than 100 such projects were started per year until 2013, rising to 229 projects in 2014 and 597 in 2015, and then increasing rapidly to 2,559 OSS projects in 2016. The number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555), and 8,737 projects were started from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, using the appearance frequency of topics as an indicator of the technology trends of AI-related OSS projects. Natural language processing remained at the top across all years, implying continuous OSS development in that area. Until 2015 the programming languages Python, C++, and Java were among the ten most frequent topics, but after 2016 the languages other than Python disappeared from the top ten; in their place, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, were also frequent topics. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical-imaging topics appeared at the top of the centrality list, although they had not been there from 2009 to 2012, indicating that OSS was being developed in the medical field to exploit AI technology. Moreover, although computer vision was in the top ten by appearance frequency from 2013 to 2015, it was not in the top ten by degree centrality; otherwise the top topics of the two lists were similar, with only slight changes in the ranks of convolutional neural networks and reinforcement learning. The trend of technology development was examined using both topic appearance frequency and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both measures have been high for it. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018 to place it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low frequency and centrality compared with the topics above. Based on these results, it is possible to identify the fields in which AI technologies are actively developed, and the results can serve as a baseline dataset for more empirical analysis of future technology trends.
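The two measures driving this analysis, topic appearance frequency and degree centrality on a topic co-occurrence network, can be sketched in a few lines. The project and topic lists below are hypothetical stand-ins for the Github data, and degree centrality follows the usual degree/(n-1) normalization.

```python
from collections import Counter
from itertools import combinations

# Sketch of the two measures in the study: how often each topic appears
# across projects, and its degree centrality in the co-occurrence network.
# The projects/topics below are hypothetical, not the Github dataset.

projects = [
    ["machine-learning", "python", "tensorflow"],
    ["machine-learning", "deep-learning", "tensorflow"],
    ["nlp", "python", "machine-learning"],
    ["computer-vision", "deep-learning"],
]

# Appearance frequency: in how many projects each topic occurs.
frequency = Counter(t for topics in projects for t in set(topics))

# Co-occurrence edges: two topics are linked if they share a project.
edges = set()
for topics in projects:
    for a, b in combinations(sorted(set(topics)), 2):
        edges.add((a, b))

# Degree centrality: degree / (n - 1), the standard normalization.
nodes = sorted(frequency)
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
centrality = {t: degree[t] / (len(nodes) - 1) for t in nodes}

print(frequency.most_common(3))
print(max(centrality, key=centrality.get))  # machine-learning
```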

Different Look, Different Feel: Social Robot Design Evaluation Model Based on ABOT Attributes and Consumer Emotions (각인각색, 각봇각색: ABOT 속성과 소비자 감성 기반 소셜로봇 디자인평가 모형 개발)

  • Ha, Sangjip;Lee, Junsik;Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.55-78
    • /
    • 2021
  • To solve complex and diverse social problems and ensure individuals' quality of life, social robots that can interact with humans are attracting attention. In the past, robots were seen as a labor substitute deployed in industrial sites on behalf of humans, but the concept has been extended to social robots that coexist with humans and enable social interaction, following the advent of smart technology, which is considered an important driver in most industries. Examples include service robots that respond to customers, robots designed for edutainment, and emotional robots that can interact intimately with humans. Nevertheless, robots have not yet become widespread despite the modern ICT service environment and the Fourth Industrial Revolution. Given that social interaction with users is a core function of social robots, factors beyond the robots' technology must be considered; in particular, the robot's design elements matter more than other factors in persuading consumers to actually purchase a social robot. Existing studies on social robots, however, either propose robot-development methodologies or test the effects of social robots on users piecemeal. Meanwhile, the consumer emotions evoked by a robot's appearance strongly influence users' perception, reasoning, evaluation, and expectations, and can affect attitudes toward robots, liking, and inferred performance. This study therefore aims to verify the effect of social-robot appearance and consumer emotions on consumers' attitudes toward social robots, constructing a social-robot design evaluation model that combines heterogeneous data from different sources. Specifically, three quantitative indicators of social-robot appearance from the ABOT database are included in the model. Consumer emotions about social-robot design were collected through (1) the existing design-evaluation literature, (2) online buzz such as product reviews and blogs, and (3) qualitative interviews on social-robot design. Scores for consumer emotions and attitudes toward various social robots were then gathered through a large-scale consumer survey. First, six major dimensions of consumer emotion were derived from 23 detailed emotions using dimension-reduction methodology. Next, statistical analysis verified the effect of the derived consumer emotions on attitudes toward social robots. Finally, moderated regression analysis verified the effect of the quantitatively collected appearance indicators on the relationship between consumer emotions and attitudes toward social robots. Interestingly, several significant moderation effects were identified; these are visualized as two-way interaction effects and interpreted from multidisciplinary perspectives. Theoretically, this study contributes by empirically verifying all stages from technical properties to consumer emotions and attitudes toward social robots by linking data from heterogeneous sources. Practically, the results help develop consumer-emotion-based design guidelines for the design stage of social-robot development.
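A moderated regression with a two-way interaction, as used in this abstract, is interpreted through simple slopes: the effect of emotion on attitude changes with the level of the moderator. The sketch below uses a hypothetical appearance indicator and illustrative coefficients, not the study's estimates.

```python
# Sketch of a moderated regression with a two-way interaction:
#   attitude = b0 + b1*emotion + b2*design + b3*emotion*design,
# where `design` stands in for a quantitative ABOT appearance indicator.
# All coefficients below are illustrative assumptions.

b0, b1, b2, b3 = 2.0, 0.50, 0.10, 0.30

def predicted_attitude(emotion, design):
    return b0 + b1 * emotion + b2 * design + b3 * emotion * design

def simple_slope(design):
    """Effect of consumer emotion on attitude at a given design level."""
    return b1 + b3 * design

# The interaction means the emotion-attitude slope depends on appearance:
print(simple_slope(0.0))  # 0.5  (e.g. low human-likeness)
print(simple_slope(1.0))  # 0.8  (e.g. high human-likeness)
```

Plotting `predicted_attitude` at low and high `design` values gives exactly the two-way interaction plots the abstract mentions.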

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems; nevertheless, there are still few realized business models based on big-data analysis. This paper therefore aims to develop a new business model for the ex-ante prediction of the likelihood of credit-guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have sought better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Research on predicting financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of the previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and a logit model; and Kim and Kim (2001) applied artificial neural networks to the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we examine the predicted probability of default for each sample case, not only the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve about 70% classification accuracy on the entire sample: LightGBM shows the highest accuracy at 71.1% and the logit model the lowest at 69%. However, these figures are open to multiple interpretations. In this business context, emphasis must be placed on minimizing type 2 error, which causes the more harmful operating losses for the guaranty company. We therefore also compare classification accuracy after splitting the predicted probability of default into ten equal intervals. The logit model shows the highest accuracy, 100%, for the 0-10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90-100% interval. Random Forest, XGBoost, LightGBM, and DNN show more desirable results: higher accuracy in both the 0-10% and 90-100% intervals, but lower accuracy around the 50% interval. Regarding the distribution of samples across intervals, LightGBM and XGBoost place relatively many samples in the two extreme intervals. Although Random Forest has an accuracy advantage on a small number of cases, LightGBM or XGBoost may be more desirable because they classify many cases into the two extreme intervals, even allowing for their relatively lower accuracy there. Considering the importance of type 2 error and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, with logistic regression performing worst. Each predictive model nonetheless has comparative advantages under different evaluation standards; for instance, Random Forest shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble that combines multiple classification models with majority voting could maximize overall performance.
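The interval-wise evaluation described here, splitting predicted default probabilities into ten equal bins and computing classification accuracy within each bin, can be sketched as follows; the predictions and labels are illustrative, not the KSURE data.

```python
# Sketch of decile-wise accuracy: bin predicted default probabilities into
# ten equal intervals and compute accuracy per bin. Data are illustrative.

def accuracy_by_decile(probs, labels, threshold=0.5):
    bins = {i: [0, 0] for i in range(10)}        # bin -> [correct, total]
    for p, y in zip(probs, labels):
        i = min(int(p * 10), 9)                  # p = 1.0 falls in last bin
        pred = 1 if p >= threshold else 0
        bins[i][0] += int(pred == y)
        bins[i][1] += 1
    return {i: (c / n if n else None) for i, (c, n) in bins.items()}

probs  = [0.05, 0.08, 0.45, 0.55, 0.92, 0.97]
labels = [0,    0,    1,    0,    1,    1]
acc = accuracy_by_decile(probs, labels)
print(acc[0], acc[9])  # 1.0 1.0 : extreme bins classified correctly
print(acc[4], acc[5])  # 0.0 0.0 : mid-probability bins are the hard cases
```

This makes the abstract's point concrete: a model can look good on the extremes while remaining unreliable near the 50% interval.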

Energy and nutrition evaluation per single serving package for each type of home meal replacement rice (가정간편식 밥류의 유형별 1회 제공 포장량 당 에너지 및 영양성분 함량 평가)

  • Choi, In-Young;Yeon, Jee-Young;Kim, Mi-Hyun
    • Journal of Nutrition and Health
    • /
    • v.55 no.4
    • /
    • pp.476-491
    • /
    • 2022
  • Purpose: The purpose of this study was to evaluate the energy and nutrient contents of home meal replacement (HMR) rice products per single-serving package based on nutrition labels. Methods: Market research was conducted from February to July 2021 on products sold on the internet, at convenience stores, and elsewhere. A total of 406 products were investigated and divided into 6 classifications: instant rice (n = 45), cup rice (n = 64), frozen rice (n = 188), rice bowls with toppings (n = 32), gimbap (n = 38), and triangular gimbap (n = 39). Results: The mean packaging weight per serving was highest for the rice bowls with toppings at 297.1 g, followed by cup rice (264.0 g), frozen rice (239.5 g), gimbap (230.2 g), instant rice (193.4 g), and triangular gimbap (121.6 g) (p < 0.001). The energy per serving package was significantly highest for the rice bowls with toppings at 496.0 kcal (p < 0.001). The sodium content per serving package was highest for gimbap at 1,021.8 mg and lowest for instant rice at 37.4 mg (p < 0.001). The price per serving package was highest for the rice bowls with toppings at 4,333.8 won. Across all types of HMR rice products surveyed, the contribution of one serving package to the daily nutritional value averaged 10-25% for energy, 11-22% for carbohydrates, and 2-51% for sodium. Conclusion: These results indicate that the energy and nutrient contents of HMR rice products vary by type. Consumers should therefore review the nutrition labeling to select an appropriate HMR rice product for their intended consumption.
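The daily-value contribution figures in this abstract are simply nutrient content per serving package divided by the labeling daily value. The sketch below assumes the common Korean labeling daily values (2,000 kcal energy, 324 g carbohydrate, 2,000 mg sodium) and a hypothetical gimbap product whose sodium matches the reported 1,021.8 mg.

```python
# Sketch of % daily value per serving package. The daily values and the
# sample product below are assumptions for illustration only.

DAILY_VALUES = {"energy_kcal": 2000, "carbohydrate_g": 324, "sodium_mg": 2000}

def contribution_pct(per_serving, daily_values):
    return {k: round(100.0 * v / daily_values[k], 1)
            for k, v in per_serving.items()}

gimbap = {"energy_kcal": 380, "carbohydrate_g": 62, "sodium_mg": 1021.8}
print(contribution_pct(gimbap, DAILY_VALUES))
# sodium comes out near 51%, the upper end of the range the abstract reports
```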

A Study of Equipment Accuracy and Test Precision in Dual Energy X-ray Absorptiometry (골밀도검사의 올바른 질 관리에 따른 임상적용과 해석 -이중 에너지 방사선 흡수법을 중심으로-)

  • Dong, Kyung-Rae;Kim, Ho-Sung;Jung, Woon-Kwan
    • Journal of radiological science and technology
    • /
    • v.31 no.1
    • /
    • pp.17-23
    • /
    • 2008
  • Purpose: Because bone densitometry results vary with the testing environment and with the precision and accuracy of the operator, quality control must be managed systematically. Equipment failures caused by overload from aging devices and growing patient numbers occur frequently, and the resulting replacement of equipment and purchase of additional bone-densitometry systems create compatibility problems in patient follow-up. This study examines whether the clinical changes in a patient's bone mineral density (BMD) can be reflected accurately and precisely when replacement and additional equipment are used interchangeably with the existing equipment. Materials and methods: Two GE Lunar Prodigy Advance systems (P1 and P2) and a HOLOGIC spine phantom (HSP) were used to measure equipment precision. Each device scanned the phantom 20 times to acquire precision data (Group 1). Operator precision was measured by scanning the same patient twice, 15 subjects on each device, among 120 women (average age 48.78; 20-60 years old) (Group 2). In addition, operator precision and cross-calibration data were obtained by scanning the HSP 20 times on each device, based on the data from daily morning quality control with the aluminum spine phantom (ASP) (Group 3). Finally, each of the 120 women (average age 48.78; 20-60 years old) was scanned once on each device alternately to obtain operator precision and cross-calibration data (Group 4). Results: According to the daily QC data, the equipment was stable at 0.996 g/cm² with a %CV of 0.08. In Group 1, the mean ± SD and %CV were 1.064 ± 0.002 g/cm² (%CV = 0.190) for P1 and 1.061 ± 0.003 g/cm² (%CV = 0.192) for P2. In Group 2, the mean ± SD and %CV were 1.187 ± 0.002 g/cm² (%CV = 0.164) for P1 and 1.198 ± 0.002 g/cm² (%CV = 0.163) for P2. In Group 3, the mean difference ± 2SD and %CV were, for P1, spine 0.001 ± 0.03 g/cm² (%CV = 0.94) and femur 0.001 ± 0.019 g/cm² (%CV = 0.96); for P2, spine 0.002 ± 0.018 g/cm² (%CV = 0.55) and femur 0.001 ± 0.013 g/cm² (%CV = 0.48). In Group 4, the mean difference ± 2SD, %CV, and r were spine 0.006 ± 0.024 g/cm² (%CV = 0.86, r = 0.995) and femur 0 ± 0.014 g/cm² (%CV = 0.54, r = 0.998). Conclusion: Both the Lunar ASP %CV and the HOLOGIC spine phantom fall within the normal error range of ±2% defined by the ISCD, and the BMD measurements keep relatively constant values, showing excellent repeatability. A phantom is homogeneous, however, and cannot reflect clinical variation such as differences in patients' body weight or body fat; phantom-based quality control is therefore useful mainly for detecting mis-calibration of the equipment. Comparing Groups 3 and 4, the values measured twice on one device and those cross-measured on the two devices all fall within 2SD on the Bland-Altman plot, and r values of 0.99 or higher in linear regression analysis indicate high precision and correlation. The two cross-calibrated devices therefore do not affect patient follow-up. Regular testing of the equipment and of operator capability, followed by appropriate calibration, should be carried out to produce reliable BMD values.
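The precision statistics reported above (mean, SD, and %CV over repeated phantom scans, with %CV = SD/mean × 100) can be computed as below; the 20 BMD values are simulated stand-ins for the phantom measurements, not the study's data.

```python
from statistics import mean, stdev

# Sketch of phantom precision statistics: mean, SD, and %CV over repeated
# scans. The 20 BMD values below (g/cm^2) are hypothetical.

scans = [1.064, 1.061, 1.066, 1.063, 1.062, 1.065, 1.064, 1.063,
         1.061, 1.066, 1.064, 1.062, 1.065, 1.063, 1.064, 1.062,
         1.066, 1.063, 1.064, 1.065]

m = mean(scans)
sd = stdev(scans)                 # sample standard deviation
cv_pct = 100.0 * sd / m           # coefficient of variation in percent
print(round(m, 4), round(cv_pct, 2))
```

A %CV well under the ISCD ±2% criterion mentioned in the conclusion indicates acceptable scanner stability.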


A study of the plan dosimetric evaluation on the rectal cancer treatment (직장암 치료 시 치료계획에 따른 선량평가 연구)

  • Jeong, Hyun Hak;An, Beom Seok;Kim, Dae Il;Lee, Yang Hoon;Lee, Je hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.28 no.2
    • /
    • pp.171-178
    • /
    • 2016
  • Purpose: To minimize the dose to the femoral heads in radiation therapy for rectal cancer, we compare and evaluate the usefulness of the conventional 3-field 3D conformal radiation therapy (3fCRT), 5-field 3D conformal radiation therapy (5fCRT), and volumetric modulated arc therapy (VMAT). Materials and Methods: Ten cases of rectal cancer treated on a 21EX accelerator were enrolled. Plans were produced with Eclipse (Ver. 10.0.42, Varian, USA), PRO3 (Progressive Resolution Optimizer 10.0.28), and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28). The 3fCRT and 5fCRT plans used gantry angles of 0°, 270°, 90° and 0°, 95°, 45°, 315°, 265°, respectively; the VMAT plans used a single coplanar 15 MV 360° arc. The prescription delivered 54 Gy to the rectum in 30 fractions. To minimize the dose differences that appear randomly during optimization, the VMAT plans were optimized and calculated twice and normalized to the target V100% = 95%. The evaluation indexes were the dose to both femoral heads and acetabular fossae, total MU, and the homogeneity index (H.I.) and conformity index (C.I.) of the PTV. All VMAT plans were verified by gamma test with portal dosimetry using EPID. Results: The dose to the right femoral head was 53.08 Gy, 50.27 Gy, and 30.92 Gy for the 3fCRT, 5fCRT, and VMAT plans, respectively; likewise, the left femoral head averaged 53.68 Gy, 51.01 Gy, and 29.23 Gy in the same order. The dose to the right acetabular fossa was 54.86 Gy, 52.40 Gy, and 30.37 Gy in the same order. The maximum dose to both femoral heads and acetabular fossae decreased in the order 3fCRT, 5fCRT, VMAT. The C.I. averaged 1.64, 1.48, and 0.99 for 3fCRT, 5fCRT, and VMAT, respectively, with the VMAT plan lowest; there was no significant difference in the H.I. of the PTV among the three plans. Total MU was 124.4 MU and 299 MU higher for the VMAT plan than for the 3fCRT and 5fCRT plans, respectively. IMRT verification gamma tests for the VMAT plans passed over 90.0% at 2 mm/2%. Conclusion: In rectal cancer treatment, the VMAT plan was advantageous in most evaluation indexes compared with the 3D plans and greatly reduced the dose to the femoral heads. However, practical limitations may make a VMAT plan difficult to select in some cases; 5fCRT reduces the femoral-head dose relative to the existing 3fCRT without additional problems. Selecting the treatment plan according to each hospital's situation would therefore improve the efficiency of radiation therapy, extending not only survival time but also quality of life in general.
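The two PTV plan-quality indexes compared above have several definitions in the literature; the sketch below assumes the common RTOG-style conformity index and a simple Dmax/prescription homogeneity index, with illustrative volumes and doses (not the study's plan data).

```python
# Sketch of PTV plan-quality indexes. Definitions vary between protocols;
# RTOG-style CI and a Dmax/Dp HI are assumed here. Inputs are illustrative.

def conformity_index(prescription_isodose_volume_cc, ptv_volume_cc):
    """RTOG CI: volume covered by the prescription dose / PTV volume (ideal ~1)."""
    return prescription_isodose_volume_cc / ptv_volume_cc

def homogeneity_index(d_max_gy, prescription_dose_gy):
    """Simple HI: maximum PTV dose / prescription dose (ideal ~1)."""
    return d_max_gy / prescription_dose_gy

# e.g. a VMAT-like plan whose 54 Gy isodose volume nearly matches the PTV:
print(round(conformity_index(495.0, 500.0), 2))   # 0.99
print(round(homogeneity_index(57.2, 54.0), 2))    # 1.06
```

A CI near 1, as reported for the VMAT plans, means the prescription isodose volume closely conforms to the target.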


The Evaluation of Food Service Menus in an Immigration Detention Center (외국인 보호소 급식 식단 품질에 대한 인식 및 만족도)

  • Kim, Hye-Jin;Kim, Woon Joo;Lee, Young Eun
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.42 no.2
    • /
    • pp.286-305
    • /
    • 2013
  • The purpose of this study was to investigate the recognition of and satisfaction with the menu quality of food services in an immigration detention center. The survey was conducted by questionnaire from January 22, 2010 to April 22, 2010; 265 responses were collected and the data were analyzed with the SAS program. In the analysis of leftovers, the most common was kimchi (37.61%), followed by breads (21.52%) and beans/bean curd (17.99%). The most common causes of leftovers were undesirable taste (31.84%) and sickness or lack of appetite (19.85%). Among cooking methods, stir-frying, broiling, and frying were strongly preferred to steaming, boiling, and salting. In the analysis of taste preferences and food-service satisfaction, there were significant differences for hot, sour, bitter, and light tastes (p<0.05, p<0.01, p<0.001): satisfaction was low for hot and light tastes, whereas sour and bitter tastes showed a high degree of satisfaction. Regarding quality improvement, most detainees wanted tastier food (58.69%), a more diverse food supply (40.54%), and clean utensils (36.68%). In the analysis of the gap between importance and performance, food taste, variety, and sanitation were recognized as poorly performed, causing major dissatisfaction with the food. The overall satisfaction score was average, 3.26 points out of 5. The satisfaction score showed no significant difference by religion or duration of stay in Korea, but differed significantly by nationality (p<0.001).
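The importance-performance gap analysis mentioned above computes, for each attribute, mean importance minus mean performance; large positive gaps flag improvement priorities. The attribute scores below are illustrative 5-point means, not the survey's data.

```python
# Sketch of an importance-performance gap analysis. Scores are hypothetical
# 5-point means; in the study, taste/variety/sanitation showed large gaps.

scores = {
    "taste":       {"importance": 4.6, "performance": 3.0},
    "variety":     {"importance": 4.3, "performance": 3.1},
    "sanitation":  {"importance": 4.5, "performance": 3.3},
    "temperature": {"importance": 3.8, "performance": 3.6},
}

gaps = {k: round(v["importance"] - v["performance"], 2)
        for k, v in scores.items()}
priority = max(gaps, key=gaps.get)   # largest importance-performance gap
print(gaps)
print(priority)  # taste
```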

Studies on the Appraisal of Stumpage Value in the Forest Land - With Respect to Kyung-Ju Area - (산원지(山元地) 임목평가(林木平価)에 관(関)한 연구(研究) - 경주지방(慶州地方)을 중심(中心)으로 -)

  • Rha, Sang Soo;Park, Tai Sik
    • Journal of Korean Society of Forest Science
    • /
    • v.52 no.1
    • /
    • pp.37-49
    • /
    • 1981
  • The purpose of this study is to find an objective method of valuing forest stands through an analysis of the logging costs directly related to timber production. Two forests (Amgog and Whangryoung), located near each other but differing slightly in forest type and in logging and skidding conditions, were selected for the study. Objective stumpage values were determined by investigating the appropriate timber production costs and the profits of logging operations. The main results are as follows: 1. Logging costs account for 13.15% of the timber market price at the Amgog logging site and 19.48% at Whangryoung. 2. Other production costs excluding logging account for 15.36% at Amgog and 28.85% at Whangryoung. 3. Total timber production costs amount to more than 28.51% of the market price at Amgog and 48.33% at Whangryoung. 4. Although the productivity of forest land is affected by species selection, tending, treatments, and effective management, the more important problem is the improvement of logging conditions. 5. Because production costs form so high a share of the timber price, labor productivity and quality should be improved, and the variation in daily piece-work output across site conditions should be minimized. 6. Although the profit of the forest industry is related to the period over which investment is recaptured, it is more closely related to working conditions, investment risk, and continuous changes in the social interest rate. 7. If the relevant timber-market variables are obtained objectively, the stumpage value of mature forests can be calculated objectively by applying the straight-line discounting method or the compound discounting method in working back from the market price to the stump.
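The two discounting methods named in the final point can be sketched as below: the net stumpage value (market price minus production costs) is discounted to the present with either simple (straight-line) or compound interest. The market price, rate, and period are illustrative; only the 28.51% cost share comes from the abstract.

```python
# Sketch of straight-line vs compound discounting of net stumpage value.
# All figures except the 28.51% cost share (Amgog) are illustrative.

def straight_line_discount(net_value, annual_rate, years):
    """Present value with simple (straight-line) interest."""
    return net_value / (1.0 + annual_rate * years)

def compound_discount(net_value, annual_rate, years):
    """Present value with compound interest."""
    return net_value / (1.0 + annual_rate) ** years

market_price = 1_000_000.0            # hypothetical timber market price
net_value = market_price * (1 - 0.2851)   # minus the ~28.51% cost share
print(round(straight_line_discount(net_value, 0.05, 2)))
print(round(compound_discount(net_value, 0.05, 2)))
```

For the same rate and period, compound discounting always yields the slightly lower present value, which is why the choice of method matters in the appraisal.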
