• Title/Summary/Keyword: 보고시스템 (reporting system)

Search Results: 1,902, Processing Time: 0.028 seconds

A Study of Waterproofing Evaluation and Effect of UV Protection (UVB/UVA) of Multiple Emulsion Sunblock Cream using Sensory Engineering Science (감성공학을 적용한 다중에멀젼 선블록크림의 자외선차단(UVA/B) 효과와 내수성 평가 연구)

  • Kim, In-Young
    • Journal of the Korean Applied Science and Technology
    • /
    • v.37 no.6
    • /
    • pp.1517-1527
    • /
    • 2020
  • This study examines the UV protection effect and water resistance of a multiple emulsion (W/O/W) sunblock cream developed with sensory engineering, and reports an actual industrial case. The multiple emulsion system changes to a W/O type that feels soft and moist when applied and shows excellent water resistance after absorption; it is a highly functional sunblock cream combining moisture and water resistance. The formulation is a stable, milky-white cream with a viscosity of 36,000 cps. The organic sunscreens used were ethylhexyl methoxycinnamate and bis-ethylhexyloxyphenol methoxyphenyl triazine, together with hexagonal zinc oxide and titanium dioxide, which block both UVB and UVA. Measured by the in-vitro method, the UV protection factor (SPF) was 78.9 for the multiple emulsion cream, 76.7 for the W/O cream, and 71.3 for the O/W cream, showing that the blocking effect differed by formulation; the multiple emulsion gave the highest value. In the clinical (in-vivo) test, the SPF of the sunblock cream developed with the multiple emulsion system was 85.7, and the UVA protection value was 26.5, corresponding to PA++++, a high level of protection. In the water resistance test, the W/O/W formulation retained a high 93.8% of its protection even after 4 hours, versus 75.4% for W/O and only 25.3% for O/W. The HUT results ranked the products in the order multiple emulsion sunblock cream > hydrophilic cream > lipophilic cream. Based on these results, the multiple emulsion is expected to serve well as a sunblock cream dedicated to outdoor activities, improving feel of use, UV protection index, and water resistance.
In this study, therefore, a multiple emulsion sunblock cream system was developed that changes to a W/O type with a soft, moist feel on application and shows excellent water resistance after absorption.

Project of Improving Good Agriculture Practice and Income by Integrated Agricultural Farming (미얀마 우수농산물 재배기술 전수사업)

  • Lee, Young-Cheul;Choi, Dong-Yong
    • Journal of Practical Agriculture & Fisheries Research
    • /
    • v.16 no.1
    • /
    • pp.193-206
    • /
    • 2014
  • The objectives of the project are to increase farmers' income through GAP and to reduce losses of agricultural produce; the Korean partner's role is to transfer the needed technologies to the project site. The project plan comprises six components: construction of buildings, installation of agricultural facilities, establishment of demonstration farms, dispatch of experts, training programs in Korea, and provision of equipment. The Project Management Committee and the Project Implementation Team consist of Korean experts and senior officials from the Department of Agriculture, Myanmar, who managed the project systematically to ensure its success. The project proceeded as follows. The foundation-laying ceremony and construction of the training center began in April 2012. The ribbon-cutting ceremony for the completion of the GAP Training Center was held under PMC (MOAI, GAPI/ARDC) arrangement at SAI, Naypyitaw, on June 17, 2012; the Chairman of GAPI, Dr. Sang Mu Lee, Director General U Kyaw Win of the DOA, officials and staff members from Korea and Myanmar, and teachers and students from SAI attended. The team carried out inspections and fixed donors' plates on donated project machinery, agro-equipment, vehicles, computers and printers, furniture, tools, and so forth. The demonstration farm for paddy rice, fruits, and vegetables was laid out in April 2012. Twenty-nine Korean rice varieties and many Korean vegetable varieties were introduced into the GAP project farm to check their suitability under Myanmar growing conditions. Paddy was cultivated three times at DAR and twice at SAI. Construction of vinyl houses for raising seedlings began in June 2012 and finished in December 2012. A fruit orchard for mango, longan, and dragon fruit was established in June 2012. Vegetables were grown to harvest, and the harvested produce was used for panel testing and distribution in January 2013. Machinery for postharvest handling systems was imported in November 2012. The washing line for vegetables was completed and the system run for testing in June 2013. New water tanks, pipelines, a pump house, and electricity were set up in October 2013.

A Case Study on the Effective Liquid Manure Treatment System in Pig Farms (양돈농가의 돈분뇨 액비화 처리 우수사례 실태조사)

  • Kim, Soo-Ryang;Jeon, Sang-Joon;Hong, In-Gi;Kim, Dong-Kyun;Lee, Myung-Gyu
    • Journal of Animal Environmental Science
    • /
    • v.18 no.2
    • /
    • pp.99-110
    • /
    • 2012
  • The purpose of this study is to collect baseline data for establishing standard administrative processes for liquid fertilizer treatment. From the survey, we identified the key points of each step through cases of effective liquid manure treatment systems on pig farms. The process is divided into six steps: 1. piggery slurry management; 2. solid-liquid separation; 3. liquid fertilizer treatment (aeration); 4. liquid fertilizer treatment (microorganism, recirculation, and internal return); 5. liquid fertilizer treatment (completion); 6. land application. Going forward, standardized liquid manure treatment technologies need to be developed based on this six-step process.

Facile [11C]PIB Synthesis Using an On-cartridge Methylation and Purification Showed Higher Specific Activity than Conventional Method Using Loop and High Performance Liquid Chromatography Purification (Loop와 HPLC Purification 방법보다 더 높은 비방사능을 보여주는 카트리지 Methylation과 Purification을 이용한 손쉬운 [ 11C]PIB 합성)

  • Lee, Yong-Seok;Cho, Yong-Hyun;Lee, Hong-Jae;Lee, Yun-Sang;Jeong, Jae Min
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.2
    • /
    • pp.67-73
    • /
    • 2018
  • $[^{11}C]PIB$ synthesis has been performed in our lab by loop methylation and HPLC purification. However, this method is time-consuming and requires a complicated system. We therefore developed an on-cartridge method that simplifies the synthetic procedure and greatly reduces time by removing the HPLC purification step. We compared six different cartridges and evaluated the $[^{11}C]PIB$ production yields and specific activities. $[^{11}C]MeOTf$ was synthesized using a TRACERlab FXC Pro and transferred onto the cartridge by blowing with helium gas for 3 min. To remove byproducts and impurities, the cartridges were washed with 20 mL of 30% EtOH in 0.5 M $NaH_2PO_4$ solution (pH 5.1) and 10 mL of distilled water. $[^{11}C]PIB$ was then eluted with 5 mL of 30% EtOH in 0.5 M $NaH_2PO_4$ into a collecting vial containing 10 mL of saline. Among the six cartridges, only the tC18 environmental cartridge removed impurities and byproducts from $[^{11}C]PIB$ completely and showed higher specific activity than the traditional HPLC purification method. The method took only 8~9 min from methylation to formulation. For the tC18 environmental cartridge and the conventional HPLC loop method, the radiochemical yields were $12.3{\pm}2.2%$ and $13.9{\pm}4.4%$, respectively, and the molar activities were $420.6{\pm}20.4GBq/{\mu}mol$ (n=3) and $78.7{\pm}39.7GBq/{\mu}mol$ (n=41), respectively. We successfully developed a facile on-cartridge methylation method for $[^{11}C]PIB$ synthesis that makes the procedure simpler and more rapid and yields higher molar activity than the HPLC purification method.

A Study on the Impact of Artificial Intelligence on Decision Making : Focusing on Human-AI Collaboration and Decision-Maker's Personality Trait (인공지능이 의사결정에 미치는 영향에 관한 연구 : 인간과 인공지능의 협업 및 의사결정자의 성격 특성을 중심으로)

  • Lee, JeongSeon;Suh, Bomil;Kwon, YoungOk
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.231-252
    • /
    • 2021
  • Artificial intelligence (AI) is a key technology that will shape the future, affecting industry as a whole and daily life in various ways. As data availability increases, AI finds optimal solutions and infers and predicts through self-learning, and research and investment in automation that discovers and solves problems on its own continue. AI automation offers benefits such as cost reduction and minimizing the limits of human intervention and capability, but it also has side effects, such as the limits of AI autonomy and erroneous results due to algorithmic bias, and in the labor market it raises fear of job replacement. Prior studies on the utilization of AI have shown that individuals do not necessarily use the information (or advice) it provides. People are more sensitive to algorithm errors than to human errors and avoid algorithms after seeing them err, a phenomenon called "algorithm aversion." Recently, AI has begun to be understood from the perspective of augmenting human intelligence, and interest has shifted to human-AI collaboration rather than AI alone. A study of 1,500 companies in various industries found that human-AI collaboration outperformed AI alone. In medicine, pathologist-deep learning collaboration reduced pathologists' cancer diagnosis error rate by 85%. Leading AI companies, such as IBM and Microsoft, are starting to position AI as augmented intelligence. Human-AI collaboration is emphasized in the decision-making process because AI is superior in information-based analysis, while intuition is a uniquely human capability; together they can reach better decisions.
In an environment where change accelerates and uncertainty increases, the need for AI in decision-making will grow, and active discussion is expected on approaches that use AI for rational decision-making. This study investigates the impact of AI on decision-making, focusing on human-AI collaboration and the interaction between the decision-maker's personality traits and the advisor type. Advisors were classified into three types: human, AI, and human-AI collaboration. We examined the perceived usefulness of advice, the utilization of advice in decision-making, and whether the decision-maker's personality traits are influencing factors. Three hundred and eleven adult male and female participants performed a task predicting the age of faces in photos. The results showed that advisor type does not directly affect the utilization of advice; decision-makers utilize advice only when they believe it can improve prediction performance. In the case of human-AI collaboration, decision-makers rated the perceived usefulness of advice higher regardless of their personality traits and utilized the advice more actively. When the advisor was AI alone, decision-makers who scored high in conscientiousness, high in extroversion, or low in neuroticism rated the perceived usefulness of the advice higher and utilized it actively. This study is academically significant in that it focuses on human-AI collaboration, an area of rapidly growing interest, and expands the relevant research area by considering AI in the advisor role within decision-making and judgment research; practically, it suggests what companies should consider to enhance their AI capability.
To improve the effectiveness of AI-based systems, companies must not only introduce high-performance systems but also secure employees who properly understand the digital information presented by AI and can add non-digital information when making decisions. Moreover, to increase utilization of AI-based systems, task-oriented competencies such as analytical skills and information technology capability are important. In addition, greater performance can be expected if employees' personality traits are taken into account.

A Study on Legal and Regulatory Improvement Direction of Aeronautical Obstacle Management System for Aviation Safety (항공안전을 위한 장애물 제한표면 관리시스템의 법·제도적 개선방향에 관한 소고)

  • Park, Dam-Yong
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.31 no.2
    • /
    • pp.145-176
    • /
    • 2016
  • Aviation safety can be secured through regulations and policies across various areas and their thorough execution in the field. Recently, Korea has been working to prevent aviation accidents through various measures: selecting and promoting major strategic goals for each sector; establishing the National Aviation Safety Program, including the Second Basic Plan for Aviation Policy; and improving aviation-related legislation. An obstacle limitation surface must be established and publicly notified to ensure safe take-off and landing, as well as aviation safety while aircraft circle around airports. This study reviews the current aeronautical obstacle management system, which is designed to ensure that buildings and structures do not exceed the height of the obstacle limitation surface, identifies its operational problems based on my field experience, and proposes legal and regulatory improvements. Prompted by requests from residents in the vicinity of airports, discussions and studies on aeronautical reviews are actively under way, and related ordinances and specific procedures will soon be established. Beyond this, however, I propose ways to remedy the shortcomings of the current system caused by gaps in obstacle management regulation and legislation. Enforcing the obstacle limitation surface requires limits on the construction of new buildings, placing real restrictions on nearby residents' exercise of their property rights. It is therefore a sensitive issue: many related civil complaints are filed, and swift but accurate decision-making is required. Under the Aviation Act, airport operators currently handle this task in cooperation with local governments. Thus the administrative activities of local governments, which have the authority to permit the installation of buildings and structures, are critically important. The law requires precise surveying of a vast area and reporting of the outcome to the government every five years. However, many problems can arise, such as changes in the number of obstacles due to survey error, or failure to consult with local governments on the exercise of construction permission. Yet there are no standards for allowable error, no preventive measures, and no penalties for violating the appropriate procedures, so only follow-up measures can be taken. Moreover, once the construction of a building that violates the obstacle limitation surface is completed, it is practically difficult to take any measure, including removal of the building, because the owner will have followed the legal process and obtained a permit from the government. To address this, I believe a penalty provision for violation of the Aviation Act needs to be added, and the allowable-error standards stipulated in the Building Act should also be applied to precise surveying in the aviation field. On this basis, I propose ways to improve the current system effectively.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, causing enormous damage. IT facility failures in particular are irregular because of interdependence, making their causes hard to identify. Previous studies predicting failures in data centers treated each server as a single, independent state, without assuming interaction between devices. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures occurring within servers, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved, in particular because server failures do not occur singly: one server's failure can cause failures on other servers or be triggered by them. In other words, whereas existing studies analyzed failures on the assumption of single, non-interacting servers, this study assumes that failures have effects between servers.
To define complex failure situations in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. Failures on each device were sorted in chronological order, and when a failure occurred on one device, any failure occurring on another device within 5 minutes was defined as simultaneous. After constructing sequences of devices that failed at the same time, the five devices that most frequently failed simultaneously within those sequences were selected, and the cases in which the selected devices failed together were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network model structure was used, considering that the level of multiple failures differs per server; this method improves prediction accuracy by weighting a server more heavily as its impact on the failure increases. The study began by defining failure types and selecting the analysis targets. In the first experiment, the same collected data was treated both as a single-server state and as a multiple-server state, and the two were compared. The second experiment improved prediction accuracy in the complex-server case by optimizing each server's threshold.
In the first experiment, which assumed single and multiple servers respectively, the single-server case predicted that three of the five servers had no failure even though failures actually occurred, whereas the multiple-server case predicted failure on all five servers. This result supports the hypothesis that servers affect one another: prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's effect differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. Using these results, failures are expected to be preventable in advance.
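The 5-minute simultaneity rule described in this abstract can be sketched as follows (an illustrative Python sketch based only on the rule as stated, not the authors' code; all names and data are hypothetical):

```python
from datetime import datetime, timedelta

# Failures on different devices within 5 minutes of each other are
# treated as one simultaneous (complex) failure event.
WINDOW = timedelta(minutes=5)

def group_simultaneous(failures):
    """failures: list of (timestamp, device) tuples, in any order."""
    events = sorted(failures)  # chronological order
    groups = []
    for ts, device in events:
        # Join the current group if within the window of its latest failure.
        if groups and ts - groups[-1][-1][0] <= WINDOW:
            groups[-1].append((ts, device))
        else:
            groups.append([(ts, device)])
    return groups

failures = [
    (datetime(2020, 1, 1, 0, 0), "server-A"),
    (datetime(2020, 1, 1, 0, 3), "server-B"),  # within 5 min of server-A
    (datetime(2020, 1, 1, 1, 0), "server-C"),  # a separate event
]
groups = group_simultaneous(failures)  # two groups: {A, B} and {C}
```

The grouped sequences would then feed the per-server LSTM inputs the abstract describes.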

The Comparison of Results Among Hepatitis B Test Reagents Using National Standard Substance (국가 표준물질을 이용한 B형 간염 검사 시약 간의 결과 비교)

  • Lee, Young-Ji;Sim, Seong-Jae;Back, Song-Ran;Seo, Mee-Hye;Yoo, Seon-Hee;Cho, Shee-Man
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.2
    • /
    • pp.203-207
    • /
    • 2010
  • Purpose: Hepatitis B is an infection caused by the hepatitis B virus (HBV). Several methods, kits, and instruments are currently used for hepatitis B testing, and these non-uniform methods can cause differences between results. To manage these differences, the performance of the test system and reagents should be evaluated using particular standard substances. The aim of this study is to investigate the tendencies of the RIA reagents used at Asan Medical Center by comparing several test reagents against national standard substances. Materials and Methods: The standard substances for biological medicines of the National Institute of Food and Drug Safety Evaluation comprise five materials, four antigens and one antibody. We tested reagents using kits from companies A and B according to each test method, and all tests were measured repeatedly to obtain accurate results. Results: For the "HBs Ag Mixed Titer Performance Panel," the match rates in S/CO units against the RIA method, the three EIA reagents, and the two CIA reagents were 94.4% (17/18) and 83.3% (15/18) for company A's reagent, and 88.9% (16/18) and 77.8% (14/18) for company B's. For the 13 samples of the "HBs Ag Low Titer Performance Panel," the two EIA reagents gave 7 positive results, the three CIA reagents 11, and the RIA reagents of companies A and B gave 3 and 2, respectively. The "HBV surface antigen 86.76 IU/vial" standard was tested in serial dilution: company A obtained positive results down to a 600-fold dilution (0.14 IU/mL), company B down to 300-fold (0.29 IU/mL). For "HBV human immunoglobulin 95.45 IU/vial," company A showed positive results down to a 10,000-fold dilution (9.5 mIU/mL) and company B down to 4,000-fold (24 mIU/mL). For "HBs Ag Working Standards 0.02~11.52 IU/mL," company A's kit gave positive results at concentrations of 0.38 IU/mL and above, and company B's at 2.23 IU/mL and above. Conclusion: Comparing the various test reagents and the RIA method against the national standard substances for hepatitis B testing, we found no significant discrepancies between reagents. Even in the dilution tests, positive results were obtained down to a 600-fold dilution for antigen and up to a 10,000-fold dilution for antibody. We therefore confirmed that hepatitis B virus testing at Asan Medical Center is performed soundly by its reagents and system.
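The dilution-to-concentration conversion behind the detection limits quoted above is simple arithmetic; a sketch using the panel values from the abstract (the helper name is mine, and the stock values are assumed to be per-mL after reconstitution):

```python
# Concentration remaining at a given dilution factor:
# stock concentration divided by the dilution factor.
def diluted_concentration(stock_iu_per_ml, dilution_factor):
    return stock_iu_per_ml / dilution_factor

# HBV surface antigen standard, 86.76 IU/vial
ag_600 = diluted_concentration(86.76, 600)   # ~0.14 IU/mL (kit A's last positive)
ag_300 = diluted_concentration(86.76, 300)   # ~0.29 IU/mL (kit B's last positive)

# HBV human immunoglobulin standard, 95.45 IU/vial, reported in mIU/mL
ab_10000 = diluted_concentration(95.45, 10_000) * 1000  # ~9.5 mIU/mL (kit A)
ab_4000 = diluted_concentration(95.45, 4_000) * 1000    # ~24 mIU/mL (kit B)
```

The computed values reproduce the parenthesized concentrations reported for each dilution endpoint.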


Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web, including the deep web, now exceeds 500 billion pages. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. Retrieval effectiveness, however, is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user's query, and the most relevant documents do not necessarily appear at the top of the query output. Nor can current search tools retrieve, from the gigantic mass of documents, the documents related to a retrieved document. The most important problem for many current search systems is to increase the quality of search: to provide related documents while keeping the number of unrelated documents in the results as low as possible. For this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this scheme, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations an article makes, linking the article with the cited works.
Citation indexes were originally designed mainly for information retrieval. Citation links allow navigating the literature in unique ways: papers can be located independently of language and of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). But CiteSeer cannot index links between articles that researchers do not make, because it indexes only the links researchers create when citing other articles; for the same reason, CiteSeer does not scale easily. These problems motivate the design of a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence in a document. A document is converted into a tabular form in which each extracted predicate is checked against its possible subjects and objects. We build a hierarchical graph of each document from this table and then integrate the graphs of the documents. From the graph of all documents, we calculate the area of each document relative to the integrated documents and mark the relations among documents by comparing their areas. We also propose a method for structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approach with the Lucene search engine using standard ranking formulas. As a result, the F-measure is about 60%, which is about 15% better.
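The F-measure used in the comparison above is the standard harmonic mean of precision and recall; a minimal sketch (the numbers are illustrative, not the paper's data):

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative retrieval run: 50 documents retrieved, 30 of them relevant,
# out of 40 relevant documents overall.
precision = 30 / 50   # 0.6
recall = 30 / 40      # 0.75
f1 = f_measure(precision, recall)  # ~0.667
```

Because it is a harmonic mean, F-measure rewards systems that balance precision against recall rather than maximizing one at the other's expense.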

Quality Assurance for Intensity Modulated Radiation Therapy (세기조절방사선치료(Intensity Modulated Radiation Therapy; IMRT)의 정도보증(Quality Assurance))

  • Cho Byung Chul;Park Suk Won;Oh Do Hoon;Bae Hoonsik
    • Radiation Oncology Journal
    • /
    • v.19 no.3
    • /
    • pp.275-286
    • /
    • 2001
  • Purpose: To set up quality assurance (QA) procedures for implementing intensity modulated radiation therapy (IMRT) clinically, and to report the QA procedures performed for one patient with prostate cancer. Materials and Methods: $P^3IMRT$ (ADAC) and a linear accelerator (Siemens) with a multileaf collimator (MLC) were used to implement IMRT. First, the positional accuracy and reproducibility of the MLC and the leaf transmission factor were evaluated. RTP commissioning was performed again to account for small-field effects. After RTP recommissioning, a test plan for a C-shaped PTV was made using 9 intensity modulated beams, and the calculated isocenter dose was compared with the dose measured in a solid water phantom. As patient-specific IMRT QA, one patient with prostate cancer was planned using 6 beams totaling 74 segmented fields. The same beams were used to recalculate dose in a solid water phantom, and the doses were measured with a 0.015 cc micro-ionization chamber, a diode detector, films, and an array detector and compared with the calculations. Results: The positioning accuracy of the MLC was about 1 mm, and the reproducibility was around 0.5 mm. For the leaf transmission factor for 10 MV photon beams, interleaf leakage was measured as 1.9% and midleaf leakage as 0.9% relative to a $10\times10\;cm^2$ open field. Penumbra measurements with film, the diode detector, the micro-ionization chamber, and a conventional 0.125 cc chamber showed that the 80-20% penumbra width measured with the 0.125 cc chamber was 2 mm larger than that of film, meaning a 0.125 cc ionization chamber is unacceptable for measuring small fields such as a 0.5 cm beamlet. After RTP recommissioning, the discrepancy between the measured and calculated dose profiles for a small field of $1\times1\;cm^2$ was less than 2%. The isocenter dose of the C-shaped PTV test plan, measured twice with the micro-ionization chamber in the solid phantom, showed errors of up to 12% for individual beams, but the total delivered dose agreed with the calculation within 2%. The transverse dose distribution measured with EC-L film generally agreed with the calculation. The isocenter dose for the patient, measured in the solid phantom, agreed within 1.5%. On-axis dose profiles of each individual beam at the position of the central leaf, measured with film and the array detector, showed that in the out-of-field region the calculation underestimates the dose by about 2%, while inside the field the measurements agreed within 3%, except at some positions. Conclusion: IMRT requires tighter quality control of the MLC than conventional large-field treatment, and QA procedures are needed to check intensity patterns more efficiently. In conclusion, we set up appropriate QA procedures for IMRT through a series of verifications, including measurement of the absolute dose at the isocenter with a micro-ionization chamber, film dosimetry to verify the intensity pattern, and measurement with an array detector to compare off-axis dose profiles.
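The agreement figures quoted in this abstract ("within 2%", "within 1.5%") are percent deviations of measured from calculated dose; a minimal sketch of that bookkeeping (illustrative values, not the study's data):

```python
# Point-by-point percent deviation of measured dose from calculated dose.
def percent_deviation(measured, calculated):
    return [100.0 * (m - c) / c for m, c in zip(measured, calculated)]

measured = [1.98, 2.01, 2.03]    # hypothetical measured doses (Gy)
calculated = [2.00, 2.00, 2.00]  # hypothetical planned doses (Gy)
devs = percent_deviation(measured, calculated)
worst = max(abs(d) for d in devs)  # "agreement within <worst> percent"
```

A plan point passes a tolerance check when its absolute deviation stays below the stated limit.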
