• Title/Summary/Keyword: Real Number

Search Results: 4,513, Processing Time: 0.035 seconds

Prediction of infectious diseases using multiple web data and LSTM (다중 웹 데이터와 LSTM을 사용한 전염병 예측)

  • Kim, Yeongha;Kim, Inhwan;Jang, Beakcheol
    • Journal of Internet Computing and Services / v.21 no.5 / pp.139-148 / 2020
  • Infectious diseases have long plagued mankind, and predicting and preventing them has been a major challenge. For this reason, various studies have been conducted to predict infectious diseases. Most early studies relied on epidemiological data from the Centers for Disease Control and Prevention (CDC), and the problem was that the CDC data were updated only once a week, making real-time prediction of disease outbreaks difficult. However, with the emergence of various Internet media following the recent development of IT technology, studies have attempted to predict the occurrence of infectious diseases from web data, and most of the studies we surveyed used a single web data source. Disease forecasting from a single web data source, however, has the disadvantage that it is difficult to collect large amounts of training data and to make accurate predictions for recent outbreaks such as "COVID-19". Thus, we demonstrate through experiments that LSTM models that use multiple web data sources to predict the occurrence of infectious diseases are more accurate than those that use a single source, and we suggest a model suitable for predicting infectious diseases. In the experiments, we predicted the occurrence of "Malaria" and "Epidemic parotitis" using single-web-data models and the proposed model. A total of 104 weeks of news, SNS, and search-query data were collected, of which 75 weeks were used as training data and 29 weeks as verification data. When predicting the verification data with the proposed model and with single web data, the Pearson correlation coefficients of the proposed model's predictions were the highest at 0.94 and 0.86, and its RMSEs were the lowest at 0.19 and 0.07.
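
As a rough illustration of the multi-source setup this abstract describes, the sketch below stacks weekly news, SNS, and search-query series as parallel LSTM input features and predicts next-week case counts. All shapes, layer sizes, and the synthetic data are assumptions for illustration; only the 104-week total and the roughly 75/29 train/verification split come from the abstract.

```python
# Minimal sketch, assuming weekly counts from news, SNS, and search queries
# are stacked as parallel input features; shapes and layer sizes are
# illustrative, not the paper's configuration.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW = 4       # weeks of history per sample (assumed)
N_SOURCES = 3    # news, SNS, search-query series

def make_windows(series, window=WINDOW):
    """Slice a (weeks, sources+1) array into (samples, window, sources)
    inputs, with the next week's case count (last column) as target."""
    X, y = [], []
    for t in range(len(series) - window):
        X.append(series[t:t + window, :N_SOURCES])
        y.append(series[t + window, -1])
    return np.array(X), np.array(y)

# synthetic stand-in for 104 weeks of (news, SNS, query, cases) data
data = np.random.rand(104, N_SOURCES + 1)
X, y = make_windows(data)

model = Sequential([
    LSTM(32, input_shape=(WINDOW, N_SOURCES)),
    Dense(1),                                   # predicted weekly cases
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:75], y[:75], epochs=50, verbose=0)   # ~75 weeks of training
pred = model.predict(X[75:]).ravel()              # remainder for verification

# the abstract's two evaluation metrics
rmse = float(np.sqrt(np.mean((pred - y[75:]) ** 2)))
pearson = float(np.corrcoef(pred, y[75:])[0, 1])
print(rmse, pearson)
```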

Development of Acquisition and Analysis System of Radar Information for Small Inshore and Coastal Fishing Vessels - Suppression of Radar Clutter by CFAR - (연근해 소형 어선의 레이더 정보 수록 및 해석 시스템 개발 - CFAR에 의한 레이더 잡음 억제 -)

  • 이대재;김광식;신형일;변덕수
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.39 no.4 / pp.347-357 / 2003
  • This paper describes the suppression of sea clutter on a marine radar display using a cell-averaging CFAR (constant false alarm rate) technique, and the analysis of radar echo signal data in relation to the estimation of ARPA functions and the detection of the shadow effect in clutter returns. The echo signal was measured using an X-band radar located at Pukyong National University, with a horizontal beamwidth of $3.9^{\circ}$, a vertical beamwidth of $20^{\circ}$, a pulsewidth of $0.8\,{\mu}s$, and a transmitted peak power of 4 kW. The suppression performance for sea clutter was investigated for probabilities of false alarm between $10^{-0.25}$ and $10^{-1.0}$. The performance of cell-averaging CFAR was also compared with that of an ideal fixed threshold, the motion vectors and trajectories of ships were extracted, and the shadow effect in clutter returns was analyzed. The results obtained are summarized as follows: 1. The ARPA plotting results and motion vectors for acquired targets, extracted by analyzing the echo signal data, were displayed on the PC-based radar system, and the continuous trajectories of ships were tracked in real time. 2. To suppress sea clutter in a noisy environment, a cell-averaging CFAR processor with a total CFAR window of 47 samples (20+20 reference cells, 3+3 guard cells, and the cell under test) was designed. On a particular data set acquired at Suyong Man, Busan, Korea, when the probability of false alarm applied to the designed processor was $10^{-0.75}$, the suppression of radar clutter was significantly improved. The results suggest that the designed cell-averaging CFAR processor is very effective in uniform clutter environments. 3. It is concluded that cell-averaging CFAR may give a considerable improvement in suppression of uniform sea clutter compared to the ideal fixed threshold. 4. The effective height of a target, estimated by analyzing the shadow effect in clutter returns for a number of range bins behind the target as seen from the radar antenna, was approximately 1.2 m, and this height information can be used to extract the shape parameter of a tracked target.
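
The cell-averaging CFAR described in item 2 can be sketched directly from the quoted window (20+20 reference cells, 3+3 guard cells, 47 samples in total). The threshold multiplier below follows the standard CA-CFAR derivation for exponentially distributed clutter power; the test signal is synthetic, not the paper's radar data.

```python
# 1-D cell-averaging CFAR sketch using the window sizes quoted above.
import numpy as np

REF, GUARD = 20, 3
PFA = 10 ** -0.75             # probability of false alarm from the abstract

def ca_cfar(power, ref=REF, guard=GUARD, pfa=PFA):
    """Return a boolean detection mask for a 1-D power profile."""
    n = 2 * ref                               # number of averaged reference cells
    alpha = n * (pfa ** (-1.0 / n) - 1.0)     # standard CA-CFAR threshold factor
    half = ref + guard
    detections = np.zeros_like(power, dtype=bool)
    for i in range(half, len(power) - half):
        lead = power[i - half : i - guard]            # 20 cells before the guards
        lag = power[i + guard + 1 : i + half + 1]     # 20 cells after the guards
        z = (lead.sum() + lag.sum()) / n              # local clutter estimate
        detections[i] = power[i] > alpha * z
    return detections

# toy example: exponential sea clutter with one strong target at cell 256;
# at this (deliberately high) Pfa, scattered false alarms are expected,
# with the target cell among the detections
rng = np.random.default_rng(0)
line = rng.exponential(1.0, 512)
line[256] += 40.0
print(np.flatnonzero(ca_cfar(line)))
```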

A Comparative Analysis of the Level of Occupational Health : Before and After the Subsidiary Program on Health Care Management of Small Scale Industries (영세사업장 보건관리 지원사업 실시 전후의 산업보건수준 비교 분석)

  • Jung, Hye Sun
    • Korean Journal of Occupational Health Nursing / v.4 / pp.58-83 / 1995
  • Small-scale industries with fewer than 30 employees account for 86.5% of all industries in Korea. Although they have higher accident rates and poorer environmental conditions than large industries, appointing a health care manager at the factory has not been mandatory. Therefore, from 1993, the government has subsidized the health care management of small industries. The purpose of this study is to identify the actual state of health care in small industries and to evaluate the level of health care management before and after the subsidiary program. Sixty-five small plating industries managed by the same health care management support institution in 1993 were selected for the study. Of the 65 industries, 3 that had undergone neither environmental evaluation nor health screening in 1994 and 9 that had closed were excluded from the study sample. The remaining 53 were analyzed using the results of environmental evaluation and health screening reported to the Ministry of Labor before and after the subsidiary program; the analysis compared the paired two-year data of the same industries. The over-permissible-limit rate, the health screening implementation rate, and the above-grade-C rate were calculated and compared. The status of health care management: 1. Of the sample industries, 96.9% provide protective equipment and 80.0% have installed ventilating systems. Protective gloves (89.2%) and protective clothing (80.0%) are widely provided, but ear plugs (4.6%) are rarely provided. Only 21.5% of the protective equipment is worn properly, and 40.4% of the ventilating systems function well. 2. In 1993, 35 industries (53.8% of the sample) checked their working environment twice. The over-permissible-limit rates for heavy metal (12.2%), suspended particles (11.1%), and noise (5.5%) were high. Wearing protective equipment and installing local ventilating systems were pointed out by the examiners. 3. General health screening was done at 63.1% of the sample industries, and 35.3% of all workers were examined. Specific health screening was done at 93.8% of the sample industries, and 75.4% of workers were examined. 15.5% of workers were found to be above grade C, with digestive system disease (43.3%), circulatory disease (18.9%), and hematopoietic disease (14.2%), etc. 4. In 1993, the subsidiary program of health care management was provided in the forms of health education, health counseling, and rounds of the working field, received by 61.5%, 83.0%, and 55.4% of the sample industries respectively. The average number of visits per industry was 1.8. Comparisons of the level of occupational health before and after the subsidiary program: 1. The over-permissible-limit rates of hazardous factors in 1993 and 1994 were compared. The rates for suspended particles, noise, and organic solvents in 1994 (37.5%, 13.4%, and 24.2% respectively) were higher than those in 1993 (25.0%, 6.0%, and 6.3% respectively). For acid, there was no difference between 1993 and 1994. Only the rate for heavy metal decreased, from 12.9% in 1993 to 3.0% in 1994. 2. General health screening was done at 38.7% of the sample industries in 1993 and at 44.6% in 1994, but the implementation rate of specific health screening decreased from 72.4% in 1993 to 64.6% in 1994. 3. The implementation rate of specific health screening was analyzed by health factor. The rate for suspended particles increased from 61.8% in 1993 to 91.2% in 1994, but the rates for the others (noise, organic solvents, heavy metal, and specific chemical substances) decreased. 4. The above-grade-C rate in health screening increased from 27.8% in 1993 to 35.5% in 1994, but the rates for endocrine disorders and pulmonary disease decreased.

Study on the Rice Yield Reduction and Overhead Flooding Depth for Design of Drainage System (배수 설계를 위한 벼의 관수심 및 관수피해율에 관한 연구)

  • 김천환;김시원
    • Magazine of the Korean Society of Agricultural Engineers / v.24 no.4 / pp.69-79 / 1982
  • The objective of this study is to contribute to drainage planning in the most realistic and economical way by establishing the relationship between rice yield reduction and overhead flooding by muddy water at each growth stage of paddy, which is the most important factor in determining optimum drainage facilities. This study was based mainly on data from the experimental reports of the Office of Rural Development of Korea, the Reduction Rate Estimation for Summer Crops published by the Ministry of Agriculture and Forestry of Japan, and other related research documentation. The results of this study are summarized as follows: 1. Damage from overhead flooding is highest at the heading stage and tends to decrease in the order of the booting stage, panicle formation stage, tillering stage, and the stage just after transplanting. Damage from overhead flooding at each growing stage is as follows (in the equations below, x is the duration of overhead flooding in days and y the yield reduction in percent): a) Overhead flooding just after transplanting is considered to have little influence on plant growth and yield because the paddy has a sufficient growth period from flooding to harvest time. b) According to the equation $y = 11.12x^{0.908}$ derived from this study, damage from overhead flooding during the tillering stage for 1, 2, and 3 successive days is 11.1%, 20.9%, and 30.2% respectively. c) Damage from overhead flooding after the panicle formation stage is very serious because the recovery period after damage is very short and ineffective tillering is common. According to the equation $y = 9.58x + 10.01$ derived from this study, damage from overhead flooding for 1, 2, 3, and 5 successive days is 19.6%, 29.2%, 38.8%, and 57.9% respectively. d) The booting stage is a very important period in which the young panicle has grown almost completely and the number of glumous flowers is fixed, since reduction division takes place in the microspore mother cell and embryo mother cell. According to the equation $y = 39.66x^{0.558}$ derived from this study, damage from overhead flooding for 0.5, 1, 3, and 5 successive days is 26.9%, 39.7%, 72.2%, and 97.4% respectively. Therefore, damage from overhead flooding is very serious during the booting stage. e) When the ear of paddy emerges, flowering begins on that day or the next; once the paddy flowers, fertilization is completed 2-3 hours after flowering. Overhead flooding during the heading stage therefore impedes flowering and increases the sterilization percentage, so damage at the heading stage is larger than at the booting stage. According to the equation $y = 41.94x^{0.589}$ derived from this study, damage from overhead flooding for 0.5, 1, 3, and 5 successive days is 27.9%, 63.1%, 80.1%, and 100% respectively. 2. Considering that the temperature at the booting stage is higher than at the heading stage and that plant height at the booting stage is ten centimeters shorter than at the heading stage, the booting stage should be taken as the critical period of paddy growth for drainage planning, because the possibility of damage occurring at the booting stage is larger than at the heading stage. 3. The allowable overhead flooding depth differs with the stage of growth. If the booting stage is adopted as the design stage of growth for drainage planning, the allowable flooding depths for new varieties and general varieties are considered to be 70 cm and 80 cm respectively. 4. The Reduction Rate Estimation by Wind and Flood for Rice Planting, the present design criterion for drainage planning, gives damage from overhead flooding only for 1 to 2, 3 to 4, and 5 to 7 consecutive days, even though damage varies considerably over several hours, and its experimental conditions of soil, paddy variety, and climate differ from the real situation. For these reasons, damage from flooding could not be estimated properly in the past. This study has derived equations that give flooding damage for each growth stage on an hourly basis; therefore, it has become possible to compute the exact damage when the duration of overhead flooding is known.
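
The four stage-specific equations above translate directly into a small calculator; a minimal sketch is shown below, assuming x is the number of successive days of overhead flooding and y the yield reduction in percent (capped at 100%, as the heading-stage figures imply).

```python
# Direct transcription of the four regression equations quoted above;
# x: successive days of overhead flooding, y: yield reduction in percent.
DAMAGE_EQS = {
    "tillering":         lambda x: 11.12 * x ** 0.908,
    "panicle_formation": lambda x: 9.58 * x + 10.01,
    "booting":           lambda x: 39.66 * x ** 0.558,
    "heading":           lambda x: 41.94 * x ** 0.589,
}

def damage(stage: str, days: float) -> float:
    """Yield reduction (%) for a growth stage, capped at 100%."""
    return min(DAMAGE_EQS[stage](days), 100.0)

# reproduces the quoted figures, e.g. ~20.9% for 2 days at tillering,
# ~26.9% for 0.5 days and ~97.4% for 5 days at booting
for stage in DAMAGE_EQS:
    print(stage, [round(damage(stage, d), 1) for d in (0.5, 1, 2, 3, 5)])
```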

Crosshole EM 2.5D Modeling by the Extended Born Approximation (확장된 Born 근사에 의한 시추공간 전자탐사 2.5차원 모델링)

  • Cho, In-Ky;Suh, Jung-Hee
    • Geophysics and Geophysical Exploration / v.1 no.2 / pp.127-135 / 1998
  • The Born approximation is widely used for solving complex scattering problems in electromagnetics. Approximating the total internal electric field by the background field is reasonable for small material contrasts as long as the scatterer is not too large and the frequency is not too high. In many geophysical applications, however, moderate and high conductivity contrasts cause both the real and imaginary parts of the internal electric field to differ greatly from the background. In the extended Born approximation, which can improve the accuracy of the Born approximation dramatically, the total electric field in the integral over the scattering volume is approximated by the background electric field projected through a depolarization tensor. The finite-difference and finite-element methods are usually used in EM scattering problems with a 2D model and a 3D source, owing to their capability to simulate complex subsurface conductivity distributions. The price paid for a 3D source is that many wavenumber-domain solutions and their inverse Fourier transforms must be computed. In these differential-equation methods, the whole domain, including homogeneous regions, must be discretized, which increases the number of nodes and the matrix size; they therefore require long computing times and large memory. In this study, an EM modeling program for a 2D model and a 3D source was developed based on the extended Born approximation. The solution is very fast and stable. Using the program, crosshole EM responses for a vertical magnetic dipole source were obtained, and the results were compared with 3D integral-equation solutions. The agreement between the integral-equation solution and the extended Born approximation is remarkable over the entire frequency range but degrades as the conductivity contrast between the anomalous body and the background medium increases; the extended Born approximation is accurate when the conductivity contrast is lower than 1:10. Therefore, the location and conductivity of an anomalous body can be estimated effectively by the extended Born approximation, although quantitative estimation of conductivity is difficult when the conductivity contrast is too high.
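
For reference, the approximation the abstract contrasts can be written compactly. In the standard formulation (notation assumed here: $\mathbf{E}_b$ the background field, $\overline{\mathbf{G}}$ the dyadic Green's function, $\Delta\sigma$ the conductivity contrast), the Born approximation replaces the internal field by the background field, while the extended Born approximation projects it through a depolarization tensor:

```latex
% Full scattering integral equation, then the two internal-field choices.
\begin{align}
  \mathbf{E}(\mathbf{r}) &= \mathbf{E}_b(\mathbf{r})
    + \int_V \overline{\mathbf{G}}(\mathbf{r},\mathbf{r}')\,
      \Delta\sigma(\mathbf{r}')\,\mathbf{E}(\mathbf{r}')\,dv' \\
  \text{Born:}\quad
  \mathbf{E}(\mathbf{r}') &\approx \mathbf{E}_b(\mathbf{r}') \\
  \text{extended Born:}\quad
  \mathbf{E}(\mathbf{r}') &\approx \overline{\boldsymbol{\Gamma}}(\mathbf{r}')\,
      \mathbf{E}_b(\mathbf{r}'),
  \quad
  \overline{\boldsymbol{\Gamma}}(\mathbf{r}') =
    \Bigl[\,\mathbf{I} - \int_V \overline{\mathbf{G}}(\mathbf{r}',\mathbf{r}'')\,
      \Delta\sigma(\mathbf{r}'')\,dv''\Bigr]^{-1}
\end{align}
```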

Numerical Test for the 2D Q Tomography Inversion Based on the Stochastic Ground-motion Model (추계학적 지진동모델에 기반한 2D Q 토모그래피 수치모델 역산)

  • Yun, Kwan-Hee;Suh, Jung-Hee
    • Geophysics and Geophysical Exploration / v.10 no.3 / pp.191-202 / 2007
  • To identify the detailed attenuation structure of the southern Korean Peninsula, a numerical test was conducted for the Q tomography inversion to be applied to the dataset accumulated through 2005. In particular, the stochastic point-source ground-motion model (STGM model; Boore, 2003) was adopted for the 2D Q tomography inversion so that it could be applied directly to simulating strong ground motion. Simultaneous inversion of the STGM model parameters with a regional single-Q model was performed to evaluate the source and site effects needed to generate an artificial dataset for the numerical test. The artificial dataset consists of simulated Fourier spectra that resemble the real data in the magnitude-distance-frequency-error distribution, except that the regional single-Q model is replaced with a checkerboard pattern of laterally varying high and low Q values. A total of 75 Q blocks (grid size $35{\times}44\,km^2$ per block) was used for the checkerboard test, and a Q functional form of $Q_0 f^{\eta}$ ($Q_0 = 100$ or $500$, $0.0 < \eta < 1.0$) was assigned to each block. The checkerboard test was implemented in three steps. In the first step, initial Q values for the 75 blocks were estimated. In the second step, the site amplification function was estimated using an initial guess of A(f), the mean site amplification function (Yun and Suh, 2007) for each site class. The last step was to invert the tomographic Q values of the 75 blocks based on the results of the first and second steps. The checkerboard test demonstrated that Q values can be estimated robustly by the 2D Q tomography inversion method even in the presence of source and site effects perturbed from the true input model.
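
A minimal sketch of the checkerboard input model described above: 75 blocks alternating between $Q_0 = 100$ and $Q_0 = 500$, each carrying $Q(f) = Q_0 f^{\eta}$. The 5 × 15 grid layout and $\eta = 0.5$ are assumptions for illustration; the abstract fixes only the block count, the block size, and the ranges of $Q_0$ and $\eta$.

```python
# Toy checkerboard Q model: 75 blocks of alternating Q0 = 100 / 500.
import numpy as np

NX, NY = 5, 15                       # assumed layout giving 75 blocks
parity = np.add.outer(np.arange(NX), np.arange(NY)) % 2
q0 = np.where(parity == 0, 100.0, 500.0)

def q_of_f(freq_hz: float, eta: float = 0.5) -> np.ndarray:
    """Frequency-dependent Q for every block: Q(f) = Q0 * f**eta."""
    return q0 * freq_hz ** eta

print(q_of_f(1.0))                   # at 1 Hz the checkerboard is just Q0
```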

"Liability of Air Carriers for Injuries Resulting from International Aviation Terrorism" (국제항공(國際航空)테러리즘으로 인한 여객손해(旅客損害)에 대한 운송인(運送人)의 책임(責任))

  • Choi, Wan-Sik
    • The Korean Journal of Air & Space Law and Policy / v.1 / pp.47-85 / 1989
  • The fundamental purpose of the Warsaw Convention was to establish uniform rules applicable to international air transportation. The emphasis on the benefits of uniformity was considered important in the beginning and continues to be important to the present. If the desire for uniformity is indeed the mortar which holds the Warsaw system together, then it should be possible to agree on a worldwide liability limit. This liability limit would not be so unreasonable that it would be impossible for nations to adhere to it, and it would preclude any national supplemental compensation plan or Montreal Agreement type of requirement in any jurisdiction. Differentiation of liability limits by national requirement seems to be what is occurring: there is a plethora of mandated limits and Montreal Agreement type 'voluntary' limits, and it is becoming difficult to find more than a few major States where an unmodified Warsaw Convention or Hague Protocol limitation is still in effect. If this is the real world of the 1980s, then let the treaty so reflect it. Upon reviewing the Warsaw Convention, its history, and the several attempts to amend it, its strengths become apparent. Hijackings of international flights have given rise to a number of lawsuits by passengers to recover damages for injuries suffered. This comment is concerned with the liability of an airline for injuries to its passengers resulting from aviation terrorism. In addition, the analysis focuses on current airline security measures, particularly the pre-boarding screening system, and the duty of air carriers to prevent weapons from penetrating that system. An airline has a duty to exercise a high degree of care to protect its passengers from the threat of aviation terrorism. This duty would seemingly require the airline to exercise a high degree of care to prevent any passenger from smuggling a weapon or explosive device aboard its aircraft. In the case of an unarmed hijacker who boards with no instrument in his possession with which to promote the hoax, a plaintiff-passenger would be hard-pressed to show that the airline was negligent in screening the hijacker prior to boarding. In light of the airline's duty to exercise a high degree of care for the safety of all passengers on board, acquiescence to a hijacker's demands on the part of the air carrier could constitute a breach of duty only when it is clearly shown that the carrier's employees knew, or plainly should have known, that the hijacker was unarmed. A finding of willful misconduct on the part of an air carrier, which is a prerequisite to imposing unlimited liability, remains a question to be determined by a jury using the definition or standard of willful misconduct prevailing in the jurisdiction of the forum court. Through the willful misconduct provision of the Warsaw Convention, air carriers face the possibility of unlimited liability for failure to implement proper preventive precautions against terrorists. Courts, therefore, should broadly construe the willful misconduct provision of the Warsaw Convention in order to find unlimited liability for passenger injuries whenever air carrier security precautions are lacking. In this way, the courts can help ensure air carrier safety and prevention of terrorist attacks. Air carriers would then have an incentive to increase, impose, and maintain security precautions designed to thwart such potential terrorist attacks as the Korean Air Lines Flight 858 incident, which had a tremendous impact on the civil aviation community. The crash of a commercial airliner, with the attendant tragic loss of life and massive destruction of property, always gives rise to shock and indignation. The general opinion is that the legal system could be sufficient, provided the political will is there to use and apply it effectively. All agree that the main responsibility for security has to be borne by governments. I would like to remind all passengers that every discovery of the human spirit may be used for opposite ends; thus, aircraft can be used for air travel but also as targets of terrorism. A state that supports aviation terrorism is responsible for a violation of international aviation law. Generally speaking, terrorism is a violation of international law: it violates the sovereign rights of states and the human rights of individuals. Aviation terrorism, which is becoming an ever more serious issue, has to be addressed by internationally agreed and closely coordinated measures, and we have to contribute more to the creation of a general consensus among all states about the need to combat the threat of aviation terrorism.

Market Structure Analysis of Automobile Market in U.S.A (미국자동차시장의 구조분석)

  • Choi, In-Hye;Lee, Seo-Goo;Yi, Seong-Keun
    • Journal of Global Scholars of Marketing Science / v.18 no.1 / pp.141-156 / 2008
  • Market structure analysis is a very useful tool for analyzing the competitive boundary of a brand or a company. Most studies in market structure analysis, however, concern nondurable goods such as candies and soft drinks because of the availability of data. In the field of durable goods, limited data availability and the long repurchase cycle constrain such studies, and in the automobile market these constraints are even more pronounced. The purpose of this study is to analyze the structure of the automobile market based on ideas suggested by prior studies. Automobile buyers usually tend to move up a tier at their next purchase. That behavior makes it impossible to analyze the structure of the automobile market at the level of individual models, so the market structure was analyzed at the brand (company) level. In this study, consideration data were used for the market structure analysis, for the following reasons. First, since the repurchase cycle is very long, the brand-switching data used for market analysis of nondurable goods are not available. Second, as mentioned, automobile buyers tend to buy in an upper tier at their next purchase. We used survey data collected in the U.S. market in 2005 through a questionnaire; the sample size was 8,291. The number of brands analyzed was 9 of the 37 being sold in the U.S. market, together holding a market share of around 50%. The brands considered were BMW, Chevrolet, Chrysler, Dodge, Ford, Honda, Mercedes, and Toyota. A switching ratio was derived from the frequencies of the consideration sets. Strictly, these frequencies differ from the brand-switching concept; in this study, the frequency of the consideration set was used in place of brand-switching frequency for convenience. The study can be divided into two steps. The first step is to build hypothetical market structures; the second step is to choose the best structure among them, usually by logit analysis. We built three hypothetical market structures: type-cost, cost-type, and unstructured. We classified automobiles into five types: sedan, SUV (Sport Utility Vehicle), pickup, minivan, and full-size van. As for purchasing cost, we classified it into two groups based on the median value, which was $28,800. To decide the best structure among them, a maximum likelihood test was used. The market structure analysis showed that the U.S. automobile market is hierarchically structured in the form 'automobile type - purchasing cost'; that is, automobile buyers consider function or usage first and purchasing cost next. This study has some limitations in the analysis level and variable selection. First, only automobile type and purchasing cost were considered as purchase attributes, so only three hypothetical structures could be analyzed; considering other attributes would be worthwhile. Second, because of the data, the analysis was carried out at the brand level, but model-level analysis would be better, since automobile buyers consider models, not brands, and would better capture the actual competition occurring in the real market; to conduct a model-level study, more cases should be obtained. Third, the variable selection for building the nested logit model was limited by the available data. In spite of these limitations, the importance of this study lies in its attempt at market structure analysis for a durable good.
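
The structure-selection step (build candidate partitions, keep the one that best explains the data by maximum likelihood) can be sketched schematically. The Bernoulli co-consideration model, the probabilities, the groupings, and the toy counts below are all stand-ins, not the paper's nested logit specification; only the brand names come from the abstract.

```python
# Schematic structure selection: score each candidate partition of the
# brands by the likelihood of observed co-consideration counts.
import numpy as np
from itertools import combinations

brands = ["BMW", "Mercedes", "Chevrolet", "Chrysler",
          "Dodge", "Ford", "Honda", "Toyota"]

# candidate partitions into submarkets (illustrative groupings only)
structures = {
    "cost-tier":    [{"BMW", "Mercedes"},
                     {"Chevrolet", "Chrysler", "Dodge", "Ford", "Honda", "Toyota"}],
    "unstructured": [set(brands)],
}

def log_likelihood(co, n_resp, structure, p_in=0.6, p_out=0.2):
    """co[(a, b)]: respondents who considered both a and b together.
    Pairs inside one submarket are modeled as co-considered with
    probability p_in, cross-submarket pairs with p_out (assumed values)."""
    ll = 0.0
    for (a, b), n in co.items():
        same = any(a in g and b in g for g in structure)
        p = p_in if same else p_out
        ll += n * np.log(p) + (n_resp - n) * np.log(1.0 - p)
    return ll

# toy counts out of 100 respondents: within-tier pairs co-considered often,
# cross-tier pairs rarely (the study's real input was 8,291 questionnaires)
co = {pair: 15 for pair in combinations(brands, 2)}
co[("BMW", "Mercedes")] = 60
for a, b in combinations(brands[2:], 2):
    co[(a, b)] = 55

best = max(structures, key=lambda s: log_likelihood(co, 100, structures[s]))
print(best)   # the partition that explains co-consideration best
```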

Ontology-based Course Mentoring System (온톨로지 기반의 수강지도 시스템)

  • Oh, Kyeong-Jin;Yoon, Ui-Nyoung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.149-162 / 2014
  • Course guidance is a mentoring process performed before students register for coming classes. It plays a very important role in checking students' degree audits and in mentoring the classes to be taken in the coming semester, and it is intimately involved with graduation assessment and the completion of ABEEK certification. Currently, course guidance is performed manually by advisers at most universities in Korea because they have no electronic systems for it. Lacking such systems, advisers must analyze each student's degree audit and the curriculum information of their own departments, and the complexity of this process often causes human error. An electronic system is therefore essential to avoid human error in course guidance. If a relational data model-based system were applied to the mentoring process, the problems of the manual approach could be solved; however, relational data model-based systems have limitations. Curricula and certification systems can change depending on new university policy or the surrounding environment, and if they change, the schema of the existing system must be changed accordingly. It is also difficult to provide semantic search, owing to the difficulty of extracting semantic relationships between subjects. In this paper, we model a course mentoring ontology based on an analysis of the computer science department's curriculum, the structure of the degree audit, and ABEEK certification. An ontology-based course guidance system is also proposed to overcome the limitations of existing methods and to make the course mentoring process effective for both advisors and students. In the proposed system, all data consist of ontology instances. To create the instances, an ontology population module was developed using the JENA framework, which is for building semantic web and linked data applications. In the population module, mapping rules are designed that connect parts of the degree audit to the corresponding parts of the course mentoring ontology. All ontology instances are generated from the degree audits of students participating in the course mentoring test. The generated instances are saved to JENA TDB as a triple repository after an inference process using the JENA inference engine. A user interface for course guidance was implemented using Java and the JENA framework. Once an advisor or a student enters a student's information, such as name and student number, in the information request form of the user interface, the proposed system provides mentoring results based on the student's current degree audit and on rules that check the scores for each part of the curriculum, such as cultural (liberal arts) subjects, major subjects, and MSC subjects covering math and basic science. Recall and precision are used to evaluate the performance of the proposed system: recall checks that the system retrieves all relevant subjects, and precision checks whether the retrieved subjects are relevant to the mentoring results. An officer of the computer science department attended the verification of the results derived from the proposed system. Experimental results using real data from the participating students show that the proposed course guidance system based on the course mentoring ontology provides correct course mentoring results to students at all times. Advisors can also reduce the time spent analyzing a student's degree audit and calculating the score for each part. As a result, the proposed system based on ontology techniques overcomes the difficulties of manual mentoring and derives mentoring results as correct as those produced by humans.
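
The population-and-query flow described above uses the Java JENA framework; purely to keep the sketches in this listing in one language, the analogous flow is shown below with Python's rdflib. The namespace IRI and the class and property names (Student, completed, MajorSubject, credits) are invented placeholders, not the paper's ontology vocabulary.

```python
# Ontology population (degree-audit row -> triples) and a mentoring-style
# query, sketched with rdflib instead of the paper's JENA/TDB stack.
from rdflib import Graph, Literal, Namespace, RDF

CM = Namespace("http://example.org/course-mentoring#")   # hypothetical IRI
g = Graph()
g.bind("cm", CM)

# ontology population: map one degree-audit row to triples
g.add((CM.student123, RDF.type, CM.Student))
g.add((CM.student123, CM.completed, CM.DataStructures))
g.add((CM.DataStructures, RDF.type, CM.MajorSubject))
g.add((CM.DataStructures, CM.credits, Literal(3)))

# mentoring-style query: total major credits completed by the student
q = """
PREFIX cm: <http://example.org/course-mentoring#>
SELECT (SUM(?c) AS ?majorCredits) WHERE {
  cm:student123 cm:completed ?s .
  ?s a cm:MajorSubject ;
     cm:credits ?c .
}
"""
for row in g.query(q):
    print(row.majorCredits)   # compare against a curriculum rule, e.g. >= 60
```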

Dose Distribution and Design of Dynamic Wedge Filter for 3D Conformal Radiotherapy (방사선 입체조형치료를 위한 동적쐐기여과판의 고안과 조직내 선량분포 특성)

  • 추성실
    • Progress in Medical Physics / v.9 no.2 / pp.77-88 / 1998
  • Wedge-shaped isodoses are desired in a number of clinical situations. Hard wedge filters provide nominally angled isodoses, with the dosimetric consequences of beam hardening, increased peripheral dose, and non-ideal gradients at deep depths, along with the practical consequences of filter handling and placement problems. Dynamic wedging uses a combination of a moving collimator and a changing monitor dose to achieve angled isodoses. The segmented treatment tables (STT), which specify the monitor unit setting at each position of the moving collimator, were derived by numerical formula, and the characteristics of the dynamic wedge generated by the STT were compared with real dosimetry. Methods and materials: The CLINAC 2100C/D accelerator at Yonsei Cancer Center has two photon energies (6 MV and 10 MV), currently with dynamic wedge angles of $15^{\circ}$, $30^{\circ}$, $45^{\circ}$, and $60^{\circ}$. The segmented treatment tables that drive the collimator in concert with a changing monitor unit are unique for field sizes ranging from 4.0 cm to 20.0 cm in 0.5 cm steps. Transmission wedge factors were measured for each STT with a standard ion chamber. Isodose profiles, isodose curves, and percentage depth doses for the dynamic wedge filters were measured with film dosimetry. The dynamic wedge angle produced by the STT coincided well with film dosimetry. Percentage depth doses were found to be closer to the open field and shallower than those of the hard wedge filter. The wedge transmission factors decreased as the wedge angle increased and were higher than those of hard wedge filters. Dynamic wedging provided more consistent gradients across the field compared with hard wedge filters, and it has practical and dosimetric advantages over hard filters, allowing rapid setup and avoiding table collisions. Dynamic wedge filters are a positive replacement for hard filters and a step toward dynamic conformal radiotherapy and intensity-modulated radiotherapy in the future.
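
A toy segmented treatment table can illustrate the "moving collimator plus changing monitor dose" idea: the cumulative monitor units at each jaw position fix the relative dose to the points the jaw has already passed. The blending ratio tan(θ)/tan(60°) and the constant-speed sweep are simplifying assumptions (a clinical STT also accounts for jaw transmission, beam profile, and calibration), so this is a sketch of the principle, not a clinical table.

```python
# Toy STT: cumulative monitor-unit fraction versus closing-jaw position.
import numpy as np

def stt(field_cm=10.0, wedge_deg=30.0, step_cm=0.5):
    """Return (jaw position, cumulative MU fraction) pairs.

    The beam stays on while one jaw sweeps across the field; a point is
    irradiated until the jaw edge passes it, so the cumulative MU at the
    moment the jaw reaches x fixes the relative dose at x. Blending a
    parked-jaw (open) component with a constant-speed sweep tilts the
    profile; r = tan(theta)/tan(60) is one conventional blending choice.
    """
    r = np.tan(np.radians(wedge_deg)) / np.tan(np.radians(60.0))
    p = np.arange(field_cm, -step_cm / 2, -step_cm)   # jaw edge, L -> 0
    mu = (1.0 - r) + r * (1.0 - p / field_cm)         # cumulative MU fraction
    return list(zip(p, mu))

for pos, mu in stt()[:3] + stt()[-2:]:
    print(f"jaw edge at {pos:4.1f} cm -> cumulative MU {mu:.3f}")
```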
