• Title/Summary/Keyword: Complex Number

Search results: 3,057

Contaminant Mechanism and Management of Tracksite of Pterosaurs, Birds, and Dinosaurs in Chungmugong-dong, Jinju, Korea (천연기념물 진주 충무공동 익룡·새·공룡발자국 화석산지의 오염물 형성 메커니즘과 관리방안)

  • Myoungju Choie;Sangho Won;Tea Jong Lee;Seong-Joo Lee;Dal-Yong Kong;Myeong Seong Lee
    • Economic and Environmental Geology / v.56 no.6 / pp.715-728 / 2023
  • The tracksite of pterosaurs, birds, and dinosaurs in Chungmugong-dong, Jinju, was designated as a natural monument in 2011 and is known as the world's largest site in terms of the number and density of pterosaur footprints. Protection buildings were installed in 2018 to conserve the site. About 17% of the pterosaur, theropod, and ornithopod footprints managed under the 2nd protection building are of great academic value, but observing them is difficult because of continuous physical and chemical damage. In particular, milk-white contaminants accumulate as a complex of gypsum and airborne pollutants. The gypsum precipitates in plate or columnar forms as water evaporates during circulation around the 2nd protection building, while the dust enters through the gallery windows. The gypsum-forming aqueous solution, with calcium supplied from the lower bed and sulfur from grass growth, drains into the groundwater from the area behind the protection building. Pollen and a few minerals, the other constituents of the contaminants, also enter through the gallery windows, making the dust difficult to expel. To protect the fossil-bearing beds from these two contaminants of different origins, the water and atmospheric circulation of the 2nd protection building must be controlled and the contaminants removed continuously. For cleaning, the steam cleaning method is sufficiently effective against the powdery milk-white contaminants. Because the fossil-bearing bed consists of dark gray shale with high laser absorption, laser cleaning entails physical loss to fossils and sedimentary structures and should be avoided as much as possible.

Analyzing the Socio-Ecological System of Bees to Suggest Strategies for Green Space Planning to Promote Urban Beekeeping (꿀벌의 사회생태시스템 분석을 통한 도시 양봉 활성화 녹지 계획 전략 제시)

  • Choi, Hojun;Kim, Min;Chon, Jinhyung
    • Journal of the Korean Institute of Landscape Architecture / v.52 no.1 / pp.46-58 / 2024
  • Pollinators are organisms that carry out the pollination of plants and include Hymenoptera, Lepidoptera, Diptera, and Coleoptera. Among them, bees not only pollinate plants but also improve urban green spaces damaged by land-use change, providing habitat and food for birds and insects. Today, however, the number of pollinating plants is decreasing due to early flowering driven by climate change, fragmentation of green spaces by urbanization, and pesticide use, which in turn leads to a decline in bee populations. Declining bee populations translate directly into problems such as reduced urban biodiversity and decreased food production. Urban beekeeping has been proposed as a strategy to address this decline. However, urban beekeeping strategies are often proposed without considering the complex structure of the socio-ecological system formed by bees' foraging and pollination activities, and they are therefore unsustainable. This study therefore analyzes the socio-ecological system of honeybees structurally, using systems thinking, and proposes a green space planning strategy to revitalize urban beekeeping. First, previous studies on the social and ecological systems of bees in cities were collected and reviewed to establish the system boundary and derive the main variables for a causal loop diagram. Second, the ecological structure of bees' foraging and pollination activities and the structure of bees' ecological system in the city were analyzed, as was the socio-ecological structure of urban beekeeping, by creating individual causal loop diagrams. Finally, the socio-ecological system structure of honeybees was analyzed from a holistic perspective through an integrated causal loop diagram. Citizen participation programs, local government investment, and the creation of urban parks and green spaces in idle spaces were suggested as green space planning strategies to revitalize urban beekeeping. This study differs from previous work in that the ecological structure of bees and the social structure of urban beekeeping were analyzed holistically using systems thinking to propose strategies, policy recommendations, and implications for introducing sustainable urban beekeeping.
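As an illustration of the causal-loop analysis described above, the sketch below encodes a toy causal loop diagram as a signed directed graph and classifies each feedback loop as reinforcing or balancing. The variables and link polarities are hypothetical, not the diagrams from the paper.

```python
import networkx as nx

# Toy causal loop diagram for urban beekeeping as a signed digraph.
# Variables and polarities are illustrative, not the paper's CLD.
cld = nx.DiGraph()
cld.add_edge("bee population", "pollination", sign="+")
cld.add_edge("pollination", "urban green space", sign="+")
cld.add_edge("urban green space", "forage availability", sign="+")
cld.add_edge("forage availability", "bee population", sign="+")
cld.add_edge("pesticide use", "bee population", sign="-")

# A feedback loop is reinforcing when it contains an even number of
# negative links, and balancing when the number is odd.
for cycle in nx.simple_cycles(cld):
    edges = list(zip(cycle, cycle[1:] + cycle[:1]))
    negatives = sum(cld.edges[e]["sign"] == "-" for e in edges)
    kind = "reinforcing" if negatives % 2 == 0 else "balancing"
    print(kind, " -> ".join(cycle))
```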

Analysis of the Case of Separation of Mixtures Presented in the 2015 Revised Elementary School Science 4th Grade Authorized Textbook and Comparison of the Concept of Separation of Mixtures between Teachers and Students (2015 개정 초등학교 과학과 4학년 검정 교과서에 제시된 혼합물의 분리 사례 분석 및 교사와 학생의 혼합물 개념 비교)

  • Chae, Heein;Noh, Sukgoo
    • Journal of Korean Elementary Science Education / v.43 no.1 / pp.122-135 / 2024
  • The purpose of this study was to analyze the examples presented in the "Separation of Mixtures" section of the 2015 revised authorized science textbooks introduced in elementary schools in 2022 and to examine how teachers and students understand the concept. To do so, 96 keywords for the mixture-separation examples presented in the textbooks were extracted through three cleansing processes. To analyze teachers' perceptions, 32 elementary school teachers in Gyeonggi-do responded to a survey, and a survey of 92 fourth graders who had learned the separation of mixtures from an authorized textbook in 2022 was used for the analysis. As a result, solids accounted for the highest proportion, 54 of the 96 separations (56.3%), and the largest number of cases were presented in keeping with the students' developmental stage. They were followed by living things, liquids, other objects and substances, and gases. The structure of and interrelationships between the 96 extracted keywords were systematized through network analysis, and the connections between keywords belonging to the same mixture were analyzed. The teachers partially recognized the separation of the complex mixtures presented in the textbooks, but most of the students did not. The analysis of teachers' and students' perceptions of the seven separation categories presented in the survey showed that responses were not based on a clear conception of mixture separation but instead varied with the characteristics of each individual category; it was therefore concluded that clearer examples of mixture separation need to be presented so that students can better understand a concept that can be somewhat abstract.
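Since the abstract leans on network analysis of the extracted keywords, here is a minimal sketch of that idea on a toy co-occurrence network; the keywords and pairs are hypothetical, not the study's 96 extracted keywords.

```python
import networkx as nx

# Toy keyword co-occurrence network: an edge joins two keywords that
# appear in the same mixture example. Pairs are illustrative only.
pairs = [
    ("sand", "gravel"), ("sand", "water"), ("iron filings", "sand"),
    ("salt", "water"), ("beans", "rice"),
]
g = nx.Graph(pairs)

# Degree centrality highlights keywords shared by many mixtures.
for keyword, score in sorted(nx.degree_centrality(g).items(),
                             key=lambda kv: -kv[1]):
    print(f"{keyword:12s} {score:.2f}")
```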

Macrobenthic Community Structure Along the Environmental Gradients of Ulsan Bay, Korea (울산만의 저서환경 구배에 따른 저서동물군집 구조)

  • Yoon, Sang-Pil;Jung, Rae-Hong;Kim, Youn-Jung;Kim, Seong-Gil;Choi, Min-Kyu;Lee, Won-Chan;Oh, Hyun-Taik;Hong, Sok-Jin
    • The Sea: Journal of the Korean Society of Oceanography / v.14 no.2 / pp.102-117 / 2009
  • This study was carried out to investigate the extent to which the benthic environment of Ulsan Bay is disturbed by organic materials and trace metals from the surrounding city and industrial complex. Field surveys of the benthic environment and the macroinvertebrate community were conducted seasonally from February to November 2006 at nine stations covering the inside and outside of the bay. TOC averaged 1.7%, and four (As, Cu, Pb, Zn) of the seven trace metals measured exceeded the Effects Range Low (ERL) at most stations. A total of 199 species were sampled, and the mean density was 4,578 ind./m², both greatly dominated by polychaetes. The dominant species were Aphelochaeta monilaris (22.6%), Ruditapes philippinarum (17.1%), Magelona japonica (12.2%), and Lumbrineris longifolia (9.9%), and their distribution was governed by differences in the benthic environmental conditions of each station. Multivariate analyses identified four station groups: the northern part of the bay, the middle and lower parts of the bay, the intersection of the Taewha River and Gosa Stream, and the outside of the bay. The community heterogeneity of the inner bay was thus much greater than that of the outer bay. SIMPER analysis showed that the four groups were represented by R. philippinarum-Capitella capitata, A. monilaris-Balanoglossus carnosus, Sinocorophium sinensis-Cyathura higoensis, and M. japonica-Ampharete arctica, respectively. Spatio-temporal changes in the macroinvertebrate communities of Ulsan Bay were closely related to those of depth, mean grain size, and organic content, and Zn was also a meaningful factor in that context.
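For readers unfamiliar with the multivariate workflow mentioned above, the following is a minimal sketch of station grouping from an abundance matrix using Bray-Curtis dissimilarity and group-average clustering; the matrix values are hypothetical, not the Ulsan Bay survey data.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Toy station-by-species abundance matrix (rows: stations,
# columns: species). Values are illustrative only.
abundance = np.array([
    [120,  5,  0, 40],
    [110,  8,  2, 35],
    [  3, 90, 60,  1],
    [  5, 85, 70,  0],
], dtype=float)

# Bray-Curtis dissimilarity is the standard choice for community
# composition data; group-average linkage mirrors the clustering
# typically done before a SIMPER analysis.
d = pdist(abundance, metric="braycurtis")
tree = linkage(d, method="average")
print(fcluster(tree, t=2, criterion="maxclust"))  # station group labels
```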

Single Center Experience of the Balloon-Stent Technique for the Treatment of Unruptured Distal Internal Carotid Artery Aneurysms: Sharing a Simple and Reliable Tip to Use Scepter-Atlas Combination (원위내경동맥에 위치한 비파열성 동맥류의 치료에 있어 풍선-스텐트 테크닉에 대한 단일기관의 경험: Scepter-Atlas 조합을 사용하기 위한 간단하지만 확실한 방법)

  • Yu-jung Park;Jieun Roh;Seung Kug Baik;Jeong A Yeom;Chul-Hoo Kang;Hee Seok Jeong;Sang Won Lee
    • Journal of the Korean Society of Radiology / v.82 no.5 / pp.1258-1273 / 2021
  • Purpose: The balloon-stent technique (BST) has certain strengths as an assisted technique for the treatment of complex aneurysms. Since the release of the Atlas stent, the BST can be executed without exchanging the balloon for a stent-delivery catheter. The purpose of this article is to share our experience with the BST using the Scepter-Atlas combination. Materials and Methods: Device inspection led us to a simple method for avoiding failure when loading the Atlas into the Scepter. From March 2018 to December 2019, 57 unruptured distal internal carotid artery (dICA) aneurysms were treated with coil embolization, among which 25 aneurysms in 23 patients were treated with the BST. Clinical and angiographic data were retrospectively collected and reviewed. Results: The technical success rate of the Scepter-Atlas combination increased from 50% to 100% after careful device inspection. BST angiographic results were comparable to those of the stent-assisted coil (SAC) group treated during the same period, both immediately after embolization (modified Raymond-Roy classification [MRRC] 1 & 2: 84% in BST, 96.3% in SAC) and at short-term follow-up (MRRC 1 & 2: 95.8% in BST, 88.4% in SAC). A small number of patients had periprocedural complications, but none had clinical consequences. Conclusion: The BST using the Scepter-Atlas combination can provide an effective and safe method for the treatment of dICA aneurysms, and the Scepter can be used as a delivery catheter for the Atlas.

A Study on the Priority of RoboAdvisor Selection Factors: From the Perspective of Analyzing Differences between Users and Providers Using AHP (로보어드바이저 선정요인의 우선순위에 관한 연구: AHP를 이용한 사용자와 제공자의 차이분석 관점으로)

  • Young Woong Woo;Jae In Oh;Yun Hi Chang
    • Information Systems Review / v.25 no.2 / pp.145-162 / 2023
  • Asset management is a complex and difficult field that requires insight into numerous variables and even human psychology. It has therefore traditionally been the domain of professionals, and such services have been expensive. These markets are now changing, driven by the digital revolution, the so-called fourth industrial revolution, and robo-advisor services based on artificial intelligence technology are at the forefront, because they make investment advisory services accessible and affordable for the general public. This study aims to clarify which factors are critically important when selecting robo-advisors for service users and providers in Korea, and what differences in perception of the selection factors exist between the user and provider groups. The framework of the study was based on the 4C marketing mix model, and the model was designed and analyzed using a Delphi survey and the AHP. Four main criteria and fifteen sub-criteria were derived, and the findings are as follows. First, for both groups the importance of the four main criteria was in the order customer needs > customer convenience > customer cost > customer communication. Second, among the fifteen sub-criteria, investment purpose coverage, investment propensity coverage, fee level, and accessibility were the most important. Third, in the between-group comparison, the user group regarded fee level and accessibility as the most important factors, while the provider group regarded investment purpose coverage and investment propensity coverage as important. This study yields useful practical implications. When designing for the spread of robo-advisor services, the priorities implied by the weight differences among the four main criteria and fifteen sub-criteria provide a basis for constructing a user-oriented system. The differences in sub-criteria priorities between the groups, and the causes of the sub-criteria with large weight differences, were also identified, and it was suggested that reaching a consensus between those in charge of strategy and marketing and those in charge of system development within the provider group is very important for resolving differences in the perception of factors. Academically, the study is meaningful as an early work that derived a broad set of robo-advisor selection factors from multiple perspectives. The findings are expected to help build and spread a successful user-oriented robo-advisor system in Korea.
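As a concrete illustration of the AHP machinery the study relies on, the sketch below derives priority weights from a pairwise comparison matrix via the principal eigenvector and checks consistency; the matrix values are hypothetical, not the survey data.

```python
import numpy as np

# Hypothetical pairwise comparisons of the four main criteria
# (needs, convenience, cost, communication) on Saaty's 1-9 scale.
A = np.array([
    [1.0, 2.0, 3.0, 5.0],
    [1/2, 1.0, 2.0, 3.0],
    [1/3, 1/2, 1.0, 2.0],
    [1/5, 1/3, 1/2, 1.0],
])

# Priority weights: principal right eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / 0.90  # 0.90 is Saaty's random index for n = 4

for name, weight in zip(["needs", "convenience", "cost", "communication"], w):
    print(f"{name:14s} {weight:.3f}")
print(f"CR = {CR:.3f} (acceptable if < 0.10)")
```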

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created while operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing clients' business, a separate log data processing system needs to be established. However, existing computing environments make it difficult to realize the flexible storage expansion needed for massive unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of existing computing infrastructures. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment and can flexibly expand computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for the quick and reliable parallel-distributed processing of massive log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the blocks of the aggregated log data, the proposed system offers automatic restore functions that keep the system operating after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have complex schemas that are inappropriate for unstructured log data, and their strict schemas cannot expand to new nodes when the stored data must be distributed as the data volume rapidly increases. NoSQL does not provide the complex computations of relational databases, but it can easily expand through node dispersion when the data volume grows rapidly; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, the representative document-oriented model with a free schema structure. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the data volume increases rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects the data, classifies them according to log type, and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analyses of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log insert and query performance against a log data processing system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through MongoDB log insert performance evaluations for various chunk sizes.
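As a small illustration of the document-oriented storage described above, the sketch below inserts a schema-free log document into MongoDB with pymongo and runs a per-unit-time aggregation of the kind the log graph generator module would plot. The connection string, database, collection, and field names are hypothetical; the paper does not publish its schema.

```python
from datetime import datetime, timezone

from pymongo import ASCENDING, MongoClient

# Hypothetical connection, database, and collection names.
client = MongoClient("mongodb://localhost:27017")
logs = client["banklogs"]["transaction_logs"]

# MongoDB's schema-free documents let heterogeneous log records
# coexist in one collection; extra fields need no schema migration.
logs.insert_one({
    "ts": datetime.now(timezone.utc),
    "branch": "seoul-01",  # assumed field
    "event": "client_tx",  # assumed field
    "raw": "09:15:01 TX OK ...",
})

# Index the timestamp so per-unit-time aggregations stay fast.
logs.create_index([("ts", ASCENDING)])

# Count log volume per event type within a time window.
pipeline = [
    {"$match": {"ts": {"$gte": datetime(2013, 11, 1, tzinfo=timezone.utc)}}},
    {"$group": {"_id": "$event", "count": {"$sum": 1}}},
]
for row in logs.aggregate(pipeline):
    print(row)
```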

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.17 no.4 / pp.31-59 / 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much of it. To address this problem of information overload, ranking algorithms have been applied in various domains. As more information becomes available, ranking search results effectively and efficiently will become even more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page was estimated from the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if many other pages refer to it, and the degree of importance also increases with the importance of the referring pages. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, it uses two kinds of scores: the authority score and the hub score. A page with a high authority score is an authority on a given topic, and many pages refer to it; a page with a high hub score links to many authoritative pages. Link-structure-based ranking has thus played an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. On the other hand, since the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases highly important. The RDF graph consists of nodes and directional links, similar to the Web graph, so link-structure-based ranking seems highly applicable to Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, a collection of Web pages, with only a recursive 'refers to' property corresponding to hyperlinks, whereas the Semantic Web encompasses various kinds of classes and properties; consequently, ranking methods used in the WWW should be modified to reflect this complexity. Previous research addressed the problem of ranking query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, corresponding to Kleinberg's authority score and hub score, respectively. Focusing on the diversity of properties, they introduced property weights to control the influence of one resource on another depending on the characteristics of the property linking the two. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several Semantic Web systems to validate their technique and reported experimental results verifying its applicability to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; that is, the ratio of links to nodes should be high, or overall resources should be described in sufficient detail, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, in which pages that are less important but densely connected score higher than pages that are more important but sparsely connected, remains problematic. Third, a resource may receive a high score not because it is actually important but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that solves them. Our method is based on a class-oriented approach. In contrast to the predicate-oriented approach of previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are meant to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our algorithm resolves the TKC effect and also sheds light on the other limitations reported in previous research. In addition, we propose two ways to incorporate datatype properties, which had not been employed even when they had some significance for resource importance. We designed an experiment to show the effectiveness of the proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which previous research had overlooked; the analysis enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
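Since the abstract builds on PageRank, here is a minimal power-iteration sketch of that baseline on a toy graph; it illustrates the link-structure idea only, not the paper's class-oriented algorithm.

```python
import numpy as np

# Toy directed graph: adjacency[i][j] = 1 if page i links to page j.
adjacency = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

n = adjacency.shape[0]
M = adjacency / adjacency.sum(axis=1, keepdims=True)  # row-stochastic

d = 0.85                      # damping factor
rank = np.full(n, 1.0 / n)
for _ in range(100):          # power iteration to the fixed point
    rank = (1 - d) / n + d * (M.T @ rank)

print(rank / rank.sum())      # importance scores summing to 1
```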

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • Since the emergence of the Internet, social media built on highly interactive Web 2.0 applications have given consumers and companies a very user-friendly means of communicating with each other. Users routinely publish content expressing their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and the content is released on the Internet in real time. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, techniques for extracting, classifying, understanding, and assessing the opinions implicit in text, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we found weaknesses in these methods: they are often technically complicated and not sufficiently user-friendly to support business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining on social media content, from initial data gathering to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as open APIs, search tools, DB-to-DB interfaces, or content purchasing. The second phase is pre-processing to generate useful material for meaningful analysis; if garbage data are not removed, the results of the analysis will not provide meaningful and useful business insights, so natural language processing techniques should be applied to clean the data. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also identification information such as creation date, author name, user id, content id, hit counts, reviews or replies, favorites, and so on. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis usually relate to market trend analysis, while sentiment analysis is used for reputation analysis; there are also applications such as stock prediction, product recommendation, and sales forecasting. The last phase is the visualization and presentation of the analysis results. Its major purpose is to explain the results and help users comprehend their meaning, so deliverables should be made as simple, clear, and easy to understand as possible, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds 66.5% of the market share and has kept the No. 1 position in the Korean "Ramen" business for several decades.
We collected a total of 11,869 pieces of content, including blog posts, forum posts, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified the content into more detailed categories such as marketing features, environment, and reputation. In these phases, we used free software such as the tm, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, and valence tree maps, providing vivid, full-colored examples built with the open libraries of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. The heat map explains the movement of sentiment or volume across a category-by-time matrix, showing density as color over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly grasp the "big picture" of the business situation, since its hierarchical structure can present buzz volume and sentiment for a given period in a single visualized result. This case study offers real-world business insights from market sensing and demonstrates to practical-minded business users how they can use these kinds of results for timely decision making in response to ongoing changes in the market. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
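To make the analyzing phase concrete, here is a minimal lexicon-based polarity scorer; the lexicon and posts are toy examples in Python (the study itself used R packages), not the paper's instant-noodle language resources.

```python
# Toy sentiment lexicon; real work would use a domain-specific one.
POSITIVE = {"tasty", "love", "great"}
NEGATIVE = {"salty", "bad", "disappointed"}

def polarity(text: str) -> int:
    """Score = (# positive tokens) - (# negative tokens)."""
    tokens = text.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

posts = [
    "love this ramen so tasty",
    "too salty and disappointed",
]
for p in posts:
    score = polarity(p)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(f"{label:8s} {p}")
```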

Dose Planning of Forward Intensity Modulated Radiation Therapy for Nasopharyngeal Cancer using Compensating Filters (보상여과판을 이용한 비인강암의 전방위 강도변조 방사선치료계획)

  • Chu Sung Sil;Lee Sang-wook;Suh Chang Ok;Kim Gwi Eon
    • Radiation Oncology Journal / v.19 no.1 / pp.53-65 / 2001
  • Purpose: To improve local control for patients with nasopharyngeal cancer, we implemented 3-D conformal radiotherapy and forward intensity modulated radiation therapy (IMRT) using compensating filters. Three-dimensional conformal radiotherapy with intensity modulation is a new modality for cancer treatment. We designed treatment plans with a 3-D RTP (radiation treatment planning) system and evaluated the dose distributions with tumor control probability (TCP) and normal tissue complication probability (NTCP). Materials and Methods: We developed a treatment plan consisting of four intensity modulated photon fields delivered through compensating filters, with block transmission for critical organs. We acquired full head and neck CT images in 3 mm slices, delineated the PTV (planning target volume) and the surrounding critical organs, and reconstructed 3D images on the computer. In the planning stage, the planner specifies the number of beams and their directions, including non-coplanar beams, the prescribed dose for the target volume, and the permissible doses for normal organs and the overlap regions. We designed the compensating filters according to the tissue deficit and the shape of the PTV, set dose weights for each field to obtain an adequate dose distribution, and weighted the shielding blocks for transmission. Therapeutic gains were evaluated with numerical equations for tumor control probability and normal tissue complication probability. The TCP and NTCP computed from DVHs (dose volume histograms) were compared between 3-D conformal radiotherapy and forward intensity modulated conformal radiotherapy with compensator and block weighting. Optimization of the weight distribution was performed iteratively, starting from an initial guess or an even weight distribution. Results: Using the four-field IMRT plan, we customized the dose distribution to conform to and deliver a sufficient dose to the PTV. In addition, in the overlap regions between the PTV and the normal organs (spinal cord, salivary gland, pituitary, optic nerves), the dose was kept within the tolerance of the respective organs. Using the compensating filters, we obtained sufficient TCP values and acceptable NTCP. Quality assurance checks showed acceptable agreement between the planned and the implemented MLC (multi-leaf collimator) fields. Conclusion: IMRT provides a powerful and efficient solution for complex planning problems where the surrounding normal tissues place severe constraints on the prescription dose. Intensity modulated fields can be delivered efficaciously and accurately using compensating filters.
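The abstract evaluates plans with TCP and NTCP equations but does not state them; commonly used forms, sketched below as an assumption, are the Poisson TCP model and the Lyman-Kutcher-Burman (LKB) NTCP model with Kutcher-Burman DVH reduction.

```latex
% Assumed standard models; the paper does not state its exact equations.
% Poisson TCP over DVH bins (v_i: fractional volume, D_i: dose):
\[
  \mathrm{TCP} = \prod_i \exp\!\bigl(-N_0\, v_i\, e^{-\alpha D_i}\bigr)
\]
% LKB NTCP with Kutcher-Burman effective dose reduction:
\[
  \mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\, dx,
  \qquad
  t = \frac{D_{\mathrm{eff}} - TD_{50}}{m\, TD_{50}},
  \qquad
  D_{\mathrm{eff}} = \Bigl(\sum_i v_i\, D_i^{1/n}\Bigr)^{\!n}
\]
% N_0: initial clonogen number; \alpha: radiosensitivity; TD_50: dose
% giving 50% complication risk; m, n: slope and volume-effect parameters.
```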
